Who will regulate AI?
Back in high school, I took an ethics class where we explored what it means to be an ethical coder and how to contribute positively to the technical community. At the time, the App Store was still a relatively new phenomenon, and we were starting to see questionable behavior from developers. Coding had become so accessible that anyone, including kids in their basements, could build an app, sometimes without much of a moral compass.
One notable example was an app on the Apple App Store that sold virtual jewels for $999. It promised rubies, diamonds, and other precious stones, but all it delivered was a simple image of them. Astonishingly, a few people bought it before it was exposed as a scam. The incident pushed Apple to enforce stricter guidelines, but it highlighted a larger problem: new technologies tend to arrive with no regulation at all, and protective rules only show up after people have been hurt or scammed. By then, the damage is often already done.
Now imagine this scenario on a larger scale with AI. Anyone can access large language models (LLMs), and the potential for misuse is enormous. People can exploit AI to further their own interests, often at the expense of others. This pattern is nothing new in tech: every time a new technology emerges, it seems magical right up until someone finds a way to abuse it.
We often turn to corporations and governments for protection, but that approach has its downsides. By relying on them, we surrender much of our agency and let these entities dictate how the technology should be used. That loss of control carries long-term consequences, including higher costs and less freedom in how we use the tools ourselves.
It's crucial that we, as users and developers, understand AI well enough to make informed decisions. If corporations push out AI-driven features, we should be able to evaluate and respond to them. We can use our purchasing power, creativity, and ingenuity to influence how AI is used and ensure it aligns with our values.
Some people are resisting AI, calling it unreliable and preferring to do things themselves. That resistance is understandable, but it's also counterproductive. AI is here to stay, and refusing to engage with it only leaves us more vulnerable to the changes it brings. Learning to use AI is like learning to use the internet: it's essential for keeping up in today's world.
Moreover, understanding AI allows us to hold companies accountable for how they use it. AI should not be making sensitive decisions, such as approving medical claims or judging a person's financial worth. Those decisions are complex and nuanced, and AI often lacks the judgment to handle such subtleties. When AI gets it wrong, it's easy for a corporation to hide behind the model and dodge accountability. Don't misunderstand me: I'm not opposed to corporations using AI. I'm advocating for personal responsibility and a stake in shaping how AI is used. That shouldn't be left to the imaginations of corporations alone.
If we fail to understand and engage with AI, companies will embed it in important areas of our lives without our input. That can mean less human involvement and less accountability, all in the name of higher profits. But this is only half the threat. Anyone can get hold of AI tools and use them to take advantage of others. When someone does cause significant harm, at what point will regulations appear, and who will get to define them? Will we, the people, simply sit back and wait for politicians and regulators to protect us? At what cost? What do we give up for that protection? One thing I know for certain: corporations will make sure they land on the lenient side of whatever policies emerge.
It's essential that we make our voices heard and influence how AI is integrated into society. Governments may step in to regulate AI, but we should aim to be proactive rather than reactive. We need to be involved in shaping the future of AI to ensure it serves us all.
What are your thoughts? Do you think I'm being too pessimistic, or am I saying the quiet part out loud?