Four Demands Consumers Should be Making of the Big Tech Players Battling Over AI
As Apple, Microsoft, Google, X, Meta, Anthropic, and others compete over AI, it is important that we as consumers focus on the key demands we should be making
A TV news clip about the AI 'battle of the bots', in which I comment on developments.
Apple has just entered the AI ‘battle of the bots’.
The ‘battle of the bots’ continues regarding widely available, public-facing AI. Just last week, Apple announced what its Apple Intelligence will look like. Major players in the battle include:
OpenAI with ChatGPT
Microsoft using a version of ChatGPT
Google with Gemini
Meta with LLaMA
X (formerly Twitter) with Grok
Some other AI players, such as Anthropic with Claude
Now we have Apple using some of its own AI models while also announcing that it will use OpenAI’s ChatGPT.
Competitive battles are generally good for consumers
No one knows who will win the AI battle of the bots. Presumably, there will be a mix of AI models, but one or more may end up being dominant. At the moment, ChatGPT looks like a good contender for a significant role in the AI ecosystem.
In addition to the basic question of which AI is best at doing different tasks, there is also an ideological angle on the chatbot question that may push towards an ecosystem with a range of models. Elon Musk, with X’s Grok, has made the point that there is a demand for different ideological flavors of chatbots. Some consumers will want ideological differentiation, which will either be met by different chatbots or by users being able to specify the ideological orientation of a chatbot they are using. In my new book Surfing AI, I talk about how the culture wars will likely focus on chatbots and related AI systems as they become central to how we obtain information about the world and, as a result, how we view the world.
Useful to distinguish between the underlying AI model and the provider that delivers one or more of them to us
You can have more than one AI model delivered by a single provider or platform, as in the case of Apple, which will use some of its own AI models while also using ChatGPT. Other products, such as Shortwave, which works with your Gmail account, use multiple AI models behind the scenes. We are also seeing some platforms offer the user a choice between different AI models; Websim.ai, which I discussed recently, is one example.
Apple wants AI to be seen as friendly, not threatening, and is also leveraging its reputation for privacy and security
Apple is calling its AI ‘Apple Intelligence’ rather than ‘AI’, presumably to move away from any negative connotations associated with artificial intelligence. Apple wants us to think that it is friendly. Of course, no one knows whether it will end up friendly or not. Apple will also try to make a selling point of its focus on privacy and security. It will run the AI on your device as much as possible rather than in the cloud, and it will let you know when you are connected to the cloud, as when your iPhone offers you the option of connecting to ChatGPT. With AI, privacy and security are things consumers should be thinking about, and they are already being widely discussed. But there are some other key demands that we as consumers should be making of Big Tech as it pushes on with the AI revolution.
What should consumers be focusing on?
In general, we can just let the competition play out and presume that AI models will improve as a result. One issue I’ve already mentioned is privacy and security. Another is bias, along with AI simply getting things plain wrong sometimes. And, of course, the elephant in the room is that the frenzied competition driving AI means it is developing so fast that, in the long run (whenever that is), it may become too powerful for humanity to manage. These three issues are currently getting a fair amount of air time. However, there are four other things that I think consumers should also be demanding from Big Tech regarding AI.
Do we want to continue being the product, not the customer?
The social media business model ended up with us being the product, not the customer. In my opinion, we are in for tears if we let this model also dominate the delivery of AI, because with AI the stakes are even higher than with traditional social media. There is no such thing as a free lunch: social media are machines designed to meet others’ goals, namely advertisers selling us stuff, and this is where most of the problems of social media come from. So, if we want AI to work for us rather than for advertisers, we obviously need to pay for it ourselves as consumers. Many people think that free-to-us social media already gets us to buy and do stuff we do not want. But in terms of coercive ability, comparing current social media to AI-powered social media and other AI systems is like comparing a kindly kindergarten teacher suggesting that we do something to a 200-kilo gorilla with a cattle prod.
It might be better to be interacting with AI than with people in some situations.
There is concern about people interacting with AI, deepfakes, and what I think of as the tsunami of infotrash coming our way. And there is every reason to be freaked out about this. However, there is an interesting development that some social media platforms are offering: ‘AI users’. They appear in the same format as human users but are labeled as AI bots. Provided we get control of social media, as discussed above, I think this development could be positive in some ways. One can argue that it is preferable for your kids to interact with an AI user on social media, at least if you, as a parent, get to select the type of AI user they are interacting with and therefore know it will generally act in certain ways. This contrasts with the current situation, where your kids interact with random human strangers, and you have no idea what those strangers are teaching them about life.
AI’s environmental footprint.
AI chews up lots of energy. If everything is going to be AI-powered, consumers need to demand that AI companies be completely transparent about their energy use, so that consumers can reward the companies that are greener than others.
Avoid being locked in.
Increasingly, we will be customizing or training the AI models we use so that they know about us and can be more helpful. To the extent that this involves investing our time interacting with them, we will be getting locked into a particular underlying AI model. When OpenAI had some problems with its board last year, it made me think about the investment I had already made in developing some Custom GPTs. Teaching an AI model to do what we want is something like forming a friendship or raising a child. This is the question of portability, and we have failed to achieve it with social media. For instance, many professionals have an enormous investment in LinkedIn in terms of the messages and information about their work networks. In regard to AI, we should keep an eye on how much we would lose if we had to move to another underlying model. We should pressure companies to be transparent about this issue, and ideally, if there are technical ways that portability could be made easier, they should offer them to us.