In the course of my daily life I interact with two very different perspectives on AI, each of which seems to think the other side is delusional, ignorant, or acting in deliberate bad faith. This is difficult for me, because I think both perspectives are based on true observations and have valid points. At the same time, the debate is so polarized that it's hard to argue against the parts I think are mistaken without seeming like I'm supporting the other side. There are also real dangers of AI as it exists today that aren't seriously considered by either side within this framing.
Two views on AI
In their most extreme forms:
Snake Oil View: AI is an unreliable machine for generating plagiarism and lies, a copyright launderer that steals real humans' work and turns it into money for giant AI companies. It gets shoved into places it has no business being, at best because people have bought into false hype, at worst as a deliberate scam. It does all this at a massive cost in energy and other human and environmental resources.
Superintelligence View: AI is a technological paradigm shift. It can already do better work than humans in many cases, and that will only accelerate as AI is applied to AI research itself. In a matter of a few years, the world will be unrecognizable, and we need to think hard about how to make sure superintelligent machines are on our side, and about what humanity will do once all our jobs are obsolete.
One side is reflexively against anything with the word "AI" attached; the other thinks everyone should be preparing for the coming AI singularity.
A different view
This is why I was excited to read AI as Normal Technology, a well-researched essay published by the Knight First Amendment Institute at Columbia University that attempts to stake out a third perspective: AI is a powerful and transformative technology, akin to electricity or the internet, which is very likely to have a large impact on society, but we are not approaching a sci-fi future full of autonomous AI entities with human-like intelligence any time soon.
I highly recommend you read the whole thing; I don't agree with everything in it, but it does hit some major points that I think about a lot. I'm going to go through those here.
"Intelligence" is an incoherent concept
I've talked about this in the context of "meritocracy," but I also think it's very interesting to examine critically in the context of AI.
From the paper:
On a conceptual level, intelligence—especially as a comparison between different species—is not well defined, let alone measurable on a one-dimensional scale.
What does it mean for an AI to be "more intelligent" than a human? There are some tasks AI can already outperform humans on, and some it cannot. Humans themselves have specializations, and AI will as well. Rather than "intelligence", the authors prefer talking about "capabilities" - actual tasks the AI is capable of performing.
"Capability" does not imply power or reliability
AI is already highly capable at certain tasks! But being hypothetically capable of doing something is not the same as being able to actually execute on it. AI has to be given ways to interact with the world, and those ways are restricted much as a human's would be. Having an AI capable of driving a car is different from the roads being full of cars driven by AI. Even once we have the former, the latter will take years or decades, and the same dynamic plays out differently in every field where AI is applied.
One reason an AI's power might be restricted is reliability - especially in safety-critical applications, people aren't likely to hand over the keys to AI.
I liked the paper's example to refute the classic "paperclip maximizer" thought experiment:
The concern is that the AI will take the goal literally: It will realize that acquiring power and influence in the world and taking control over all of the world’s resources will help it to achieve that goal. Once it is all powerful, it might commandeer all of the world’s resources, including those needed for humanity’s survival, to produce paperclips.
[...]
Consider a simpler case: A robot is asked to "get paperclips from the store as quickly as possible." A system that interpreted this literally might ignore traffic laws or attempt theft. Such behavior would lead to immediate shutdown and redesign. The path to adoption inherently requires demonstrating appropriate behavior in increasingly consequential situations. This is not a lucky accident, but is a fundamental feature of how organizations adopt technology.
Even assuming we have an incredibly capable AI we assign to producing paperclips, it's difficult to imagine it being given the level of control needed to have that kind of catastrophic outcome.
Knowledge isn't always accessible to AI
From the paper:
In general, much knowledge is tacit in organizations and is not written down, much less in a form that can be learned passively. This means that these developmental feedback loops will have to happen in each sector and, for more complex tasks, may even need to occur separately in different organizations, limiting opportunities for rapid, parallel learning. Other reasons why parallel learning might be limited are privacy concerns: Organizations and individuals might be averse to sharing sensitive data with AI companies, and regulations might limit what kinds of data can be shared with third parties in contexts such as healthcare.
AI is incredibly good at extracting knowledge from whatever data you give it. But it's not magic - it can’t produce knowledge from nothing. The supporting knowledge has to exist in the world, in a form that it can access.
I've heard numerous pitches for AI products that don't seem to realize this. For example, one pitch was for an AI system that could evaluate the value of an engineer's actual contributions to a project, beyond just code. That sounds great, but what is the AI looking at to make that judgment? Is every design review meeting, hallway conversation, and mentorship session recorded? Are all the whiteboard diagrams stored as artifacts? Are the business outcomes measured in a consistent way and fed back into the system?
This also touches on the next point...
Task definition is hard
From the paper:
As anyone who has tried to outsource software or product development knows, unambiguously specifying what is desired turns out to be a surprisingly big part of the overall effort!
What is "the value of an engineer's contribution"? Is it the amount of influence they had over the final state of the system? The actual business value of the product launched? Knowledge imparted to others in the organization, via coaching and mentorship? Some combination of those? Should one be weighted more than others? Is is zero-sum, where the value generated by a team must be distributed among its members?
AI cannot solve this, because there's no "right answer" to this question. It depends on the outcomes you want to incentivize, the accuracy you need, the timeline in which you need to see results, and more. Once you thoroughly specify a task, AI can help - but often at that point you've already done most of the work.
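To make that concrete, here's a deliberately naive sketch of what a "contribution score" specification might look like. The signals, weights, and normalization here are all made up for illustration; the point is that by the time you've written something like this down, every hard decision has already been made by a human, and the AI's role shrinks to estimating the input numbers.

```python
# A hypothetical "contribution score" spec. Every name, weight, and signal
# here is a human judgment call, not something an AI can derive for us.

from dataclasses import dataclass

@dataclass
class ContributionSignals:
    """Signals we (hypothetically) decided to collect for one engineer."""
    design_influence: float  # 0-1: share of the final design they shaped
    business_value: float    # 0-1: normalized value of what they shipped
    mentorship: float        # 0-1: knowledge imparted to teammates

# The weights ARE the task definition. Choosing them is the hard part, and
# no model output tells us whether mentorship "should" count for 20%.
WEIGHTS = {"design_influence": 0.5, "business_value": 0.3, "mentorship": 0.2}

def contribution_score(signals: ContributionSignals) -> float:
    """Combine the signals according to the weights we chose."""
    return (
        WEIGHTS["design_influence"] * signals.design_influence
        + WEIGHTS["business_value"] * signals.business_value
        + WEIGHTS["mentorship"] * signals.mentorship
    )

def zero_sum_scores(team: dict[str, ContributionSignals]) -> dict[str, float]:
    """One possible answer to the zero-sum question: normalize so the team's
    scores sum to 1. Whether that's the *right* answer is, again, a policy
    decision, not a modeling one."""
    raw = {name: contribution_score(s) for name, s in team.items()}
    total = sum(raw.values()) or 1.0
    return {name: score / total for name, score in raw.items()}
```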
"AI Safety" focus on catastrophic risks ignores real, existing dangers
From the paper:
There is a long list of AI risks [...] which are nonetheless large-scale and systemic, transcending the immediate effects of any particular AI system. These include the systemic entrenchment of bias and discrimination, massive job losses in specific occupations, worsening labor conditions, increasing inequality, concentration of power, erosion of social trust, pollution of the information ecosystem, decline of the free press, democratic backsliding, mass surveillance, and enabling authoritarianism.
If AI is normal technology, these risks become far more important than the catastrophic ones discussed above. That is because these risks arise from people and organizations using AI to advance their own interests, with AI merely serving as an amplifier of existing instabilities in our society.
Or, more succinctly:
We are more concerned about risks that arise from people using AI for their own ends, whether terrorism, or cyberwarfare, or undermining democracy, or simply—and most commonly—extractive capitalistic practices that magnify inequalities.
"AI safety" often focuses on catastrophic risks like the paperclip maximizer, while AI demonstrably is already causing major harms in other ways.
It is factually true that the current paradigm of AI transfers value from people doing creative knowledge work, like artists and journalists, to the AI companies building systems that mine their data. It's true that jobs are being lost as automation increases. It's true that AI is enabling the wide proliferation of non-consensual pornography and deepfake scams. These are real harms we need to address now, as opposed to the hypothetical harm of a rogue AI.
Other points
Some other points from the paper that I won't go into in depth, because I don't have a personal take on them right now, but that I thought were interesting:
Statistics on rapid AI adoption are slanted, and there's not a lot of evidence AI is being adopted any faster than any other technology.
Benchmarks measure the things that are easy to measure, which are, not coincidentally, the things AI is best at. Passing the bar exam is a very different task from being a lawyer.
Rather than just trying to ensure models are philosophically "aligned" with human needs, it makes a lot more sense to put in downstream protections against error/misuse.
AI can be used for "defense" - improving safety in various ways - in addition to being a risk.
Governments should consider funding the arts and journalism through taxes on AI companies.
As I mentioned, I do recommend reading the whole thing. At the very least, it will hopefully offer some new things to think about!