Artificial Intelligence is a Microwave
The following is a response I wrote to a text message from a friend. Over the years, he has reached out to discuss tech topics. Occasionally, my musings are too lengthy for a terse phone reply. For those, I tend to type up a GitHub gist and send a link.
"I didn't have time to write a short letter, so I wrote a long one instead."
Recently he asked for recommendations for who to follow on LinkedIn to learn more about artificial intelligence (AI). I figured, since I already wrote this in Markdown format, I might as well make it a blog post. Perhaps others might enjoy reading it as well.
Q: "I love AI stuff, but who do you follow for keeping up with how fast it is changing?"
These folks immediately come to mind.
- Dr. Emily M. Bender: Coauthor of The AI Con
- Emmanuel Maggiori: Author of Smart Until It's Dumb
- Dr. Yann LeCun: VP & Chief AI Scientist at Meta
- Morten Rand-Hendriksen: Principal Instructor at LinkedIn
Bender and Maggiori both hold contrarian viewpoints, and often call out disingenuous takes on AI. For example: AGI, the notion that self-aware machine consciousness is right around the corner.
LeCun takes a long view, and has been involved in AI since the early days before it was a buzzword. His posts tend to lean more toward scientific aspects. He recently pointed out that AI has jumped the shark.
Rand-Hendriksen tends to post short-form videos to clarify terminology and challenge assumptions about AI. For example, how it is essentially beefed-up autocomplete without any real "thinking" taking place.
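To make that "autocomplete" framing concrete, here is a toy Python sketch of the underlying idea: tally which word tends to follow which, then always suggest the most likely successor. The corpus and function names are invented for illustration, and a real LLM is vastly more sophisticated (a neural network predicting tokens over an enormous corpus), but the core task is still next-token prediction, not deliberate thought.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word tends to follow which,
# then always suggest the most frequent successor.
corpus = "the cat sat on the mat the cat ate the popcorn".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def suggest(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # "cat" -- the most frequent follower of "the" in the corpus
print(suggest("cat"))  # "sat" -- tied with "ate"; the first one seen wins
```

Scale that idea up by many orders of magnitude and add a far more capable statistical model, and you get something that sounds fluent while still, fundamentally, guessing the next token.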
Self-promotional plug. I wrote a few thoughts on AI here.
For a longer rant, read on…
All of what I am about to say has been gleaned while generally trying to avoid the topic of AI. Yet it keeps coming up, so I sift for truth in the fluff.
Firstly, I think AI hype is a bubble.
How it started:
How it's going:
Seemingly overnight, hucksters who were pushing cryptocurrency and NFTs (during the pandemic) hard-pivoted and rebranded as AI experts. 99% of it is snake oil.
On its face, AI is a technical marvel, and that is not to take away from the brilliant work of those who brought it to market. I have simply grown weary of the oversaturated tech media coverage, as if we are about to unveil Ultron or Supremor.
I think of AI as akin to the invention of the microwave oven.
The marketing pitch was not that microwaves would obviate stoves or ovens, but that they would play a complementary role. Now most kitchens have all three.
With AI, however, the messaging is that it will replace chefs entirely. Which is of course absurd, assuming humans are still the recipients of the entrées.
There are definitely things for which a microwave is better suited. But if you have the time and patience, the end result is improved when cooking more methodically.
Fairly soon — if we are not there already — AI will hit a point of ubiquity, where its availability and output become so consistent and predictable as to be unremarkable. I am reminded of this quote.
"When art critics get together they talk about form, structure, and meaning."
"When artists get together they talk about where you can buy cheap turpentine."
Most of the AI discussions I have with other software devs metaphorically equate to: "Why the popcorn is burned."
Though I have no particular insider info, I would not be surprised if the recent Microsoft layoffs were a veiled cost-cutting measure to funnel money into scaling AI. Usage has gone mainstream, and that means greater investment in infrastructure.
It would make sense to charge for AI products in order to offset expenses, as Anthropic and OpenAI are doing. Given that the big players — Google, Meta, Microsoft — are still in the "gain as many users as possible" phase of enshittification, we have not yet seen them attempt to extract maximal value from users.
I do not think AI is getting "smarter" at the exponential rate we initially saw in 2022. The general consensus I am hearing is that AI has plateaued. Which is not a failure of research or training methods. Rather, we have already fed the entirety of digitized human knowledge and experience into various LLMs, and they have achieved similar results.
Beyond that, we have hit a ceiling on what source material remains to be ingested. Another significant breakthrough would require either a novel technological approach or another comparably massive, "Gutenberg until now" sized data set. But we do not have fresh, multi-millennia-spanning histories sitting around.