The Risks and Opportunities of AI

February 25, 2024 | by Iwein Fuld

Tags: AI, AI Technology Risks, AI Opportunities, Ethical AI Use, Technological Singularity, AI and Innovation
The singularity, a word that was very niche two years ago, suddenly made all the headlines when OpenAI released ChatGPT to the masses in 2022. The technological singularity is the point in time at which we lose control over technology and are unable to stop it.

This definition is not the same as the depiction in many science fiction works, where AI becomes sentient and, maliciously or benevolently, changes the course of humanity without consent. A self-aware AI with abilities surpassing humans does not exist as far as I know. But we may already have passed the technological singularity as defined above. Who can stop the technological beast we have created that is eating our planet? It seems to me that we've been having a pretty hard time with that for the last fifty years or more. That aside, I will focus on AI in this article and leave the planetary crises and science fiction alone where I can.

AI is not a new concept by any stretch. The AI timeline on Wikipedia stretches back more than a millennium. People have long been intrigued by, or even obsessed with, creating intelligence; the fascination predates computers and even science. The current AI hype carries a lot of that obsession, and it is important to understand what is real.

The timeline explodes when OpenAI makes its move. Everybody was on the bandwagon instantly, and of course we polished up our machine learning skills and got our hands dirty as well. What tools like DALL-E and ChatGPT can do is impressive, but it is not coming out of nowhere, as some newcomers to the field seem to believe.

Around the start of this century, one of the majors at my university got renamed from "Kunstmatige intelligentie" to "AI" for no reason other than marketing (it is simply the Dutch term, translated). The science had been around for more than a decade by then: in the nineties, AI changed from wishful thinking and talk into actual experiments and theory. Well before that happened, some scary science fiction, like The Terminator, had already been written. A lot of the FUD surrounding AI comes from such fantasies and isn't based in reality at all. Twenty years ago we were already playing with neural networks, genetic algorithms and most of the jazz needed to evolve the current LLMs. Of course the theory got refined significantly, but the most important breakthrough was triggered by raw computing power, not new science.

A friend of mine looked at ChatGPT and said it's just autocomplete. Before ChatGPT came out, several of our members worked with Copilot and called it helpful, but stupid. ChatGPT is exactly that: a little helpful, but very stupid. Here, I'll show you.

[ChatGPT screenshot]

Now at first glance, ChatGPT's writing seems more professional, has fewer typos, and gives you a better feeling. But it fucked up the point I wanted to make beyond recognition. That's _not_ what I asked it to do. I like to be brutally honest and to the point, and maybe I could tweak my prompts to get it to do that, but I could more easily use a tool like Hemingway and check my words against fallacyfiles.org to increase the quality. I'm sure that would result in a better read than what ChatGPT has to offer here.
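If you do want to go the prompt-tweaking route, here is a minimal sketch of what I mean. The call_llm helper is a hypothetical stand-in for whatever chat-style model API you use, and the blunt-editor instructions are only an illustration, not a guarantee that the model won't bloat your text anyway:

```python
# Hypothetical prompt-tweaking sketch: ask the model to edit, not to rewrite.
# call_llm is a stand-in for whatever chat-style LLM API you happen to use.

EDIT_INSTRUCTIONS = (
    "You are a blunt copy editor. Fix grammar and typos only. "
    "Do not add adjectives, enthusiasm, or marketing language. "
    "Do not change the author's argument or tone. Keep it short."
)

def call_llm(system: str, user: str) -> str:
    # Placeholder: wire this up to the model of your choice.
    raise NotImplementedError

def blunt_edit(draft: str) -> str:
    # Send the draft with editing-only instructions instead of a vague "improve this".
    return call_llm(system=EDIT_INSTRUCTIONS, user=draft)
```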

What AI can and can't do

Hemingway has now introduced AI into its tool as well, and we're still using Copilot. Spell checkers and autocomplete are useful, and making them smarter is essentially a good idea. You just don't want to make them invasive and chatty, like the paperclip we used to have in Word a long time ago (here's an old meme by James Web to freshen up your memory).

I'm curiously watching what LinkedIn is doing with their advice product, which is an AI-assisted, crowdsourced knowledge base. Merging the individual author input into one readable article is tedious work that a good LLM might be better at than humans. The heavy human feedback creates guardrails that help keep the AI from farting all kinds of random hallucinations into your brain.

I think that this is essential: use human feedback to keep it real. 
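To make that concrete, here is a minimal sketch of such a human-in-the-loop gate in Python. All names are my own invention (LinkedIn's actual pipeline is not public): the LLM may draft the merged article, but nothing gets published until a human has read it and explicitly signed off.

```python
# Sketch of a human-in-the-loop publishing gate (all names are hypothetical).
# An LLM may draft a merged article, but a human decides whether it ships.

from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False

def merge_contributions(contributions: list[str]) -> Draft:
    # Placeholder for an LLM call that merges individual author inputs into
    # one readable article. Swap in your model of choice; the dumb join below
    # only exists so the sketch runs as-is.
    return Draft(text="\n\n".join(contributions))

def human_review(draft: Draft) -> Draft:
    # The guardrail: a person reads the draft and explicitly approves it.
    print(draft.text)
    answer = input("Publish this draft? [y/N] ")
    draft.approved = answer.strip().lower() == "y"
    return draft

def publish(draft: Draft) -> None:
    if not draft.approved:
        raise RuntimeError("Refusing to publish an unreviewed AI draft.")
    print("Published.")  # real publishing would go here

if __name__ == "__main__":
    draft = human_review(merge_contributions(["Expert A's take.", "Expert B's take."]))
    if draft.approved:
        publish(draft)
```

The point of the sketch is not the code but the shape: the model never gets a direct line to the reader, because a human sits between the generation step and the publish step.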

What you should and shouldn't use AI for

So far I've painted a picture and given some examples that help you understand the possibilities a bit better. Now let's look at the practical implications. I'll show you some dos and don'ts, but rest assured they're just a few examples.

Don't use AI to bloat and obfuscate  

As you might have noticed in the example above, ChatGPT turns somewhat readable text into low-quality clickbait. I don't know exactly why, but I think the current web (which was its training data) is full of content that was groomed, mostly for Google, to look interesting and to make the reader feel positive, so that they stay longer on the page and click stuff (eventually to buy something they don't need). When I went online in the nineties, I felt like I was entering a world of knowledge with smart people giving free advice, but expecting respect and hard work in return. When I go online now, I feel mostly shouted at and talked down to by people who seem of exceptionally low intelligence. This is what capitalism has brought us, and I don't like it one bit. There are small bubbles where the old vibe still lives, but you really have to know where to find them, because Google or Facebook aren't going to take you there if they can help it. This is the quality of content that the current LLMs were trained on, and as you know: garbage in, garbage out.

Don't use AI to generate things you want to maintain later

This goes especially for code, but also for articles like this one. If you spend some time writing it yourself, it becomes much easier to change later. If you let AI generate it, you won't understand how it works, and you will have created a trap for your future self.

Embrace AI for enhancing human creativity and productivity

It's crucial to remember that AI, at its core, is a tool designed to enhance human capabilities, not replace them. The most effective use of AI is in partnership with human creativity and intelligence. For instance, AI can significantly speed up the research process by analysing vast amounts of data in seconds, which would take humans much longer to process. This doesn't mean AI is inventing new theories or making groundbreaking discoveries on its own; rather, it's enabling humans to reach those insights more efficiently.

Guidance and collaboration are key

AI's success hinges on the guidance and collaboration it receives from humans. When used wisely, it can increase productivity tenfold. However, this increased productivity is a result of human direction and intelligence, not the AI acting independently. This collaborative approach ensures that AI contributes meaningfully without straying into the realm of generating hollow content.

Non-linearity and Human Genius

History shows us that the most significant scientific discoveries and inventions often come from nonlinear thinking—moments of genius that defy conventional wisdom. While AI operates within the boundaries of existing knowledge and patterns, human creativity can leap beyond these confines to pioneer new scientific realms. AI's role, then, is to support these human endeavours, making the journey towards innovation faster and more efficient.

A path forward

Incorporating a balanced perspective on AI, recognizing its limitations and its potential to augment human efforts, can pave the way for more positive discussions about its role in society. By focusing on AI as a tool for enhancement rather than replacement, we can explore innovative ways to harness its power while addressing legitimate concerns. This approach not only demystifies AI but also opens up avenues for future articles that explore the myriad ways AI can be used to improve performance and solve complex problems.

In conclusion

The journey with AI is just beginning. By understanding its capabilities and limitations, we can navigate the risks while seizing the opportunities it presents. The future of AI is not about fearing a loss of control but about how we can guide it to enrich the human experience, foster creativity, and accelerate progress. This optimistic outlook does not ignore the challenges but instead views them as hurdles we can overcome together, leveraging AI as a powerful ally in our quest for knowledge and innovation.
