How Intel is quietly gearing up to become a player in the AI arms race
Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.
Intel’s quiet role in the AI boom
Intel isn’t often mentioned as one of the main players in the generative AI boom. While Intel made the chips that drove the personal computing revolution, it’s rival Nvidia that has supplied the graphics processing units (GPUs) powering the models behind tools like ChatGPT and Midjourney. But Nvidia’s chips are in short supply, and their prices are rising. Relying so heavily on one company to power large models is a situation nobody likes (except Nvidia).
In the real world, enterprises are using a variety of AI models, many of them smaller and more specialized than frontier models like OpenAI’s GPT-4 and Meta’s Llama. Reluctant to send their proprietary data out to third-party models, many enterprises are building their own models using open source code and hosting them on their own servers. Intel believes its chips are well suited to running these smaller, homegrown models.
Intel has been working on making AI models run more efficiently on its central processing units (CPUs) and hardware accelerators. In fact, the 55-year-old company just tested its hardware running some popular generative AI models against the widely used MLPerf performance benchmarks, and the results are competitive (and improving). It’s also been working with an AI company called Numenta, which applies lessons from neuroscience to improve the performance and power efficiency of AI models. Together, the two companies are developing tech to make AI models run more efficiently on Intel’s Xeon CPUs.
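To make the CPU-inference idea concrete, here’s a minimal sketch of running a small open-source language model on an Intel CPU using the company’s open-source Intel Extension for PyTorch. The model choice and settings are illustrative stand-ins, not anything Intel or Numenta has published:

```python
# pip install torch intel-extension-for-pytorch transformers
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small open model as a stand-in for an enterprise's homegrown model.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# ipex.optimize applies CPU-specific optimizations (e.g., operator fusion,
# bfloat16 math on Xeon chips with AMX) without changing model behavior.
model = ipex.optimize(model, dtype=torch.bfloat16)

prompt = "Enterprises are running smaller AI models on"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```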
In this first flush of a new era of computing, companies are sparing no expense to get their first generative AI models trained and into operation. But as time goes on, enterprises will naturally focus more on controlling costs. Intel wants to be ready with an answer when that time comes.
In defense of Humane’s much-hyped AI wearable
Humane recently unveiled its personal AI device, the AI Pin, a small AI-enabled square that pins to a lapel. It’s priced like a smartphone at $699 and requires a wireless subscription. It has no screen; the primary way to control the device is by talking to it. For vital visual information, it projects images onto the user’s palm.
The core idea of the Pin is a personalized AI agent that’s an expert on you and is always there to help. Humane says the Pin is “the embodiment of our vision to integrate AI into the fabric of daily life.” But so far, it’s been getting mixed reviews. Critics have found the device a bit awkward to use. And the device can’t yet train its AI model on the user’s email, calendar, and documents. (On that latter point, the company plans to provide developers a self-service kit for bringing all kinds of specialized knowledge into the Pin.)
And yet, I’m still a fan. There’s a lot of energy and hype around the Pin, and it’s showing up just when the world is beginning to embrace the next big thing in personal computing: agents. At least Bill Gates thinks so: “In short, agents will be able to help with virtually any activity and any area of life,” he wrote in a recent blog post. “Agents will be the next platform.” Of course, it’s very possible that these agents will be contained within smartphones. I hope not. Part of the idea of a dedicated, hands-free AI device is allowing people to look up, to stay engaged in the real world.
Building my first GPT
OpenAI last week launched its GPTs, custom versions of ChatGPT that anyone can personalize with their own instructions and knowledge. This week, I opened the GPT Builder (in beta) and created a GPT called MarkWrites, which I hoped I could train to write a news story in my own writing style. My GPT yielded mixed results.
Creating and training my GPT was easy. The GPT Builder tool helped me find the MarkWrites name and create a little logo of a quill and inkwell. I then instructed it to go to Fast Company’s website and read my articles to learn my writing style. No problem, it said. I told it to write concise sentences that contain only one thought, or two related thoughts. I told it to check facts using the internet. No problem, it said. Then it began prodding me to try out my new GPT in the “playground” that takes up the entire right side of the Builder interface.
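For the curious, roughly the same setup can be sketched outside the Builder with OpenAI’s Python API, where the style rules become a system prompt. The model name, prompt wording, and file name below are my own illustrative stand-ins, not what the Builder generates under the hood:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative system prompt mirroring the instructions I gave the Builder.
STYLE_RULES = (
    "Write news stories in concise sentences that contain only one thought, "
    "or two closely related thoughts. Check facts, and never alter figures "
    "from the source material."
)

press_release = open("press_release.txt").read()  # placeholder file name

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # illustrative model choice
    messages=[
        {"role": "system", "content": STYLE_RULES},
        {"role": "user", "content": "Turn this press release into a news story:\n\n" + press_release},
    ],
)
print(response.choices[0].message.content)
```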
Over in the playground, I uploaded a recent press release and told the newly born GPT to make a news story out of it. On the first try, it produced a point-by-point recreation of the document, which is not exactly discerning journalism. I told the GPT to use the “inverted pyramid” style to create a proper news story (meaning putting the most important information up top). It apparently understood, and about 20 seconds later returned a news story with a traditional “lede” paragraph followed by a series of supporting paragraphs. The writing was a bit wooden, and I couldn’t really see any signs of my own style (or maybe my writing style is wooden!), but overall it read pretty well.
But to my surprise, the main problem was a familiar one for AI chatbots. Instead of carrying the numbers in the press release straight into the story, it changed one crucial figure, hallucinating a different value in its place. I will run more experiments, perhaps prompting the GPT more explicitly about learning my writing style and checking facts. But for now at least, I think my journalism job is safe.
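One experiment worth trying is a mechanical guardrail: extract every figure from the source document and flag any figure in the generated story that the source doesn’t contain. Here’s a rough sketch, with a deliberately simple number-matching regex of my own devising:

```python
import re

def extract_numbers(text: str) -> set[str]:
    """Pull number-like tokens (e.g., 699, 3.5, 1,200) out of a text."""
    return set(re.findall(r"\d[\d,]*(?:\.\d+)?", text))

def unsourced_numbers(source: str, story: str) -> set[str]:
    """Return figures that appear in the story but not in the source."""
    return extract_numbers(story) - extract_numbers(source)

# Toy example: the story hallucinates $45 million in place of $40 million.
source = "Acme raised $40 million and grew revenue 25% in 2023."
story = "Acme raised $45 million and grew revenue 25% in 2023."
print(unsourced_numbers(source, story))  # {'45'}
```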