Can we, pretty please, quickly get over the AI hype without too much collateral damage?

Image credit: An unsurprisingly bad impression created with that image parrot by a supposedly open company creating closed large models scraped from the open web using the prompt ‘an impression about the collateral damage caused by the current AI hype and the massive misinformation and energy consumption it causes with little utility for the world, in the style of landscape posters, no text besides “AI” in big letters’ and giving up after 5 iterations of prompts

We (mostly) seem to have made it through the Blockchain and cryptoscam hype with only medium collateral damage to most of the population (highly profitable ransomware and burning much more dirty coal than would otherwise have happened), but unfortunately at the cost of stumbling right into the next one. I would hope that we could get that one over with with less collateral damage, but I am not optimistic at this point.

Which damage do I fear?

  • Again, massive energy consumption through training too-large models on too much data. Most probably not on the level of Blockchain proof-of-work idiocy, but still too much for our already-unstable environment.
  • Further, potentially irreversible erosion of public trust in online information because of verified facts being drowned in the to-be-expected, heavy industrialization of fake information production in all types of media.

What can we do to mitigate that collateral damage?

  • Stop calling it “Artificial Intelligence”. Stop calling its errors “hallucinations”. Stop speculating about “sentience” or “consciousness”. We are talking about Large Language Models (LLMs) as a very specific sub-field of Machine Learning (ML) right now. Nothing else.
  • Instead of blindly training and playing with huge models treated as black boxes, invest much more in human understanding of what actually goes on inside – whitebox, interpretable, deterministic, repeatable, and reproducible behavior of tools we intend to build applications on. Instead of being baffled by seemingly emergent properties, maybe ask more deeply why those properties emerge? Instead of slapping it onto every product/service you can reach, maybe ask if that particular use case actually benefits from adding a stochastic parrot?
  • Stop neglecting basic security and privacy principles just because “this is a new thing”. It’s not. Security means separation of code (initial prompts with priming statements) and data (untrusted user input) instead of mangling them all together. Privacy means well-defined notions like unlinkability, anonymity without the possibility to re-identify, and informed consent before ever touching any personal data in the first place.
  • It probably means again paying for good journalism. You know, actual reporters cross-checking and verifying their supposed facts before shouting them out.
  • And yes, maybe it means regulation. China regulating Bitcoin mining had a positive global effect on its energy consumption (it’s still a dumpster fire, but slightly less bad than it was before). Ethereum abandoning proof-of-work is the only real sign of light in that pit. Regulating ML is probably what it will take to mitigate some of the worst effects of this hype, but I fear that such kind of regulation will come much later than the damage.

A bit of hypocrisy here?

Wait, you hear me saying that I don’t like the Blockchain and ML hypes, but at the same time argue we need more funding and people working on security and privacy? Isn’t that also a bit of a hype these days in the tech as well as daily press? Yes, it can certainly be seen as a hype, and I have given many, many public talks on these topics in the last 5+ years (sorry if you heard me speak multiple times on related topics – it’s not a low of new stuff from one public talk to another).

What’s the difference? I don’t think security and privacy (I see them as two sides of the same coin and therefore refer to the whole in the singular) is a hype just because it is being talked about more. That would be a bit like claiming climate change and environmental concerns are a hype right now. They aren’t. It’s the fact that we collectively ignored well- and long-known problems for so long that they are now becoming really urgent. If that feels like a hype, I can assure you it is quite different. With climate change and computer/network security/privacy discussions1, we are talking about real problems for which experts have been looking hard for working solutions for some time. With Blockchain and ML, we are looking at solutions that are frantically searching for new problems to solve.


  1. I do not in the slightest claim that security and privacy are as important as climate change. Some “AI godfather” may believe solving climate change is easy, and that addressing ML-created problems is hard. Climate change is the hard part, because that decides our and many other species’ survival. Security isn’t easy either, because networked systems are already so deeply embedded into our daily lives that it can also become live-threating. But it is much less urgent, and the fallout of security failures is not at all comparable to the climate crisis. Talking about Blockchain or AI doesn’t even register in comparison. ↩︎