There is no denying that we have shrugged off the curse of the past AI winters and are now on the cusp of a new and promising era of Artificial Intelligence (AI). Amplified by the recent surge in Generative AI (a type of AI that creates new content based on patterns learned from data), we are witnessing a constant inundation of news showcasing the remarkable disruptions caused by AI. From cutting-edge advancements to ground-breaking applications, AI's relentless momentum is reshaping the world we know. Alongside the excitement, however, loom fears of job losses, ethical dilemmas, and even the idea of AI taking over the world (among many others). Let's navigate this duality of AI — balancing its potential to drive human evolution to the next stage while addressing the genuine concerns it raises.
In just a few years, AI and generative AI technology have achieved feats once considered impossible. ChatGPT passed the bar exam designed for human lawyers, with GPT-4 scoring in the top 10 percent of test takers. Meanwhile, GitHub's Copilot significantly accelerates software development and boosts functional accuracy with its code-writing assistance. AI's potential in healthcare is evident in disease detection, with health professionals using AI to identify dangerous illnesses like prostate or breast cancer, reducing the need for invasive and costly procedures and possibly even removing human error from such vital diagnoses. Companies like Babylon Health exemplify how generative AI is changing healthcare, offering scalable, AI-based diagnosis and healthcare services. These examples highlight AI's remarkable power and the vast possibilities it holds, with countless more promising applications readily found through a quick Google search.
Amidst the promises and abundant opportunities, a pertinent question arises: What are the potential negative impacts of AI? Job displacement remains a significant worry for many, and AI's effects on learning and human interaction also deserve consideration. Ethical concerns come further to the forefront, raising questions about how data is used, bias in algorithms, and the ethical use of AI in decision-making. Are these fears justified, or do they echo the sentiment of the modern-day Luddite? Let's look further and explore AI's potential drawbacks alongside its advancements.
I recently discovered that, thanks to advances in generative AI, we will be getting a new and "last" Beatles song, with AI recreating the voice of John Lennon. If that is the case, will there ever truly be a "last" Beatles song? As AI technology progresses, it holds the potential to carry the Beatles' legacy forward long after they and we are gone, generating new lyrics and melodies. The technologist in me is excited by the possibilities, and the music lover in me is looking forward to a "new" Beatles song. As we contemplate the future of creativity in the age of AI, a pressing question emerges: Are we heading down a path of "outsourcing" creativity? Or could the inventive application of this technology open doors to new realms of creativity? Take, for example, AI being used to bring Elvis back to sing Sir Mix-a-Lot's "timeless classic" Baby Got Back. The AI-generated rendition sounds surprisingly good, making it highly plausible that the uninitiated could be swayed into believing it's a genuine Elvis song.
Today, GPT-based tools serve as invaluable aids in various work tasks. Software engineers leverage them to boost productivity by writing code and documentation, while others find assistance in transcribing meeting notes and composing professional emails. This allows the engineer to focus on more critical and creative tasks rather than spending time on the mundane. The use of AI in the workplace evokes contrasting perspectives: some view outsourcing tasks to technology as potentially unethical, while others embrace it as a creative tool for everyday life.
This leads us to a crucial question and a pertinent fear: Will AI render certain jobs obsolete? It's a valid and pressing concern that warrants analysis. Drawing lessons from history, we have seen new technologies disrupt the workforce before. In many cases, however, jobs were not obliterated but rather redefined and reallocated. The advent of computers, for instance, fostered demand for computer-literate professionals, creating new opportunities. As for AI's impact, the answer is not as simple.
The apprehension about AI-related job losses is most pronounced in roles with repetitive tasks and limited decision-making, like data entry and customer service positions. As AI adoption increases, impacted employees will need to up-skill or explore new fields and roles to adapt to the changing job landscape. The potential for AI to take over jobs is also evident in the entertainment industry: the current Screen Actors Guild-American Federation of Television and Radio Artists strike is seeking protection for actors against AI-related job displacement. At the same time, broader AI adoption across roles and industries can deliver cost reductions and fewer human errors, and it also brings opportunities for new jobs in areas such as data analysis and machine learning, or even in creative fields such as film (e.g. Val Kilmer's voice in Top Gun: Maverick was only made possible by AI). This duality raises a crucial question: Will AI create more jobs, enhancing our capabilities, or will it displace jobs, leading to significant human repercussions? The difficulty in providing a simple answer comes from the fact that never before has a technology had such far-reaching implications or been able to scale so quickly and easily. We also have no way of knowing how much more "intelligent" the next generation of AI technology will become.
Generative AI is definitely a double-edged sword when it comes to learning and the way we consume information. Access to vast knowledge at our fingertips can be advantageous and can save us copious amounts of time. As someone who learned from Encyclopaedia Britannica, I recall the frustration of missing volumes at the local library while researching the JFK assassination, then spending hours if not days to get the information I needed. That adversity, however, taught resilience, a valuable trait potentially eroded by our reliance on AI to solve problems for us. As ChatGPT and related technologies integrate further into learning, we must confront this potential impact. When faced with complex problems, will we seek creative solutions and explore additional resources, or opt out simply because it's too hard, potentially leading to a generation lacking resilience?
As AI and related technologies improve, they will take on more advisory roles, becoming trusted companions rather than mere information tools. They will enable us to get answers quickly and easily, and even make decisions for us. However, we must remain cautious. If these models were trained on biased or incomplete data, the answers they provide may be inaccurate or skewed, leading to potential biases or hallucinations (confidently making up something largely false in an effort to please the user). The decisions they make might also be steeped in bias, potentially leading to unethical, divisive, or even insulting and unwanted outcomes. We must be vigilant and balance readily available AI-based answers with our own ability to critically analyse and synthesise information, ensuring that we don't blindly accept a returned answer as truth. We must also strive to train AI on diverse and comprehensive data so that we can avoid these pitfalls and meet our needs effectively.
Ethical considerations regarding AI's use are undoubtedly valid, but like any ethical issue, they can be intricate and subjective. For instance, using ChatGPT to compose an entire novel might be seen as unethical by many. However, consider an author facing writer's block while working on her creative masterpiece. Seeking inspiration from ChatGPT to describe a scene or character, and subsequently finding the creativity to finish her work, could be akin to seeking advice from a trusted friend, advisor, colleague, or peer. This raises questions about AI's role in creative endeavours, the complex intersection of ethics and technology, and where the balance lies.
The above may be a simple example, so let's turn to one with far more reaching and impactful concerns. If AI can be used for the advancement of good, what about its nefarious use as some form of weapon? I very recently watched Oppenheimer (an amazing movie that I thoroughly recommend) and saw how theory can be put into practice in the most destructive ways. However dramatised, I watched the portrayal of scientists first marvel at and then lament their own discoveries. So, what if we are on the cusp of something similar now (I can hear the virtual scoffs)? What if we are currently at the marvel stage of AI, and the ability to scale this technology lets us create easily digestible tools like PowerPoint slide creators, or pictures of people who don't exist? What if these simple tools are desensitising us to its potential downsides? As we traverse this duality, it becomes critical to strike a balance — to harness the opportunity for discovery while cautiously navigating its powerful potential for harm. Will we, in years to come (as with the nuclear experiment), look back, lament our discovery, and ask whether more could have been done?
As AI gets smarter and better, it is evident that it will "learn" to make decisions, help alleviate some of the human burden, and perhaps even reduce the stresses involved in decision-making. This, however, leads to a fear that underpins many past and contemporary works of fiction: Will AI take over the world? At present, AI is not at the stage of making the smallest self-aware decisions, let alone taking over the world. While I may not fully subscribe to the notion of self-aware reasoning machines that will one day judge humanity (and possibly find us lacking), I acknowledge that such a possibility cannot be entirely ruled out. Nevertheless, I believe this scenario is still distant, and its realization would likely require another ground-breaking discovery (possibly in the realm of quantum computing or other uncharted territories). For me, Skynet or Ultron is still closer to science fiction than science fact.
Advancements like AI, with such a dual nature and such far-reaching, life-altering impacts, necessitate boundaries and regulations. Yet determining these guidelines raises vital questions about the decision-makers: who makes these policies and guidelines? Being human, are they not also flawed and biased? We must also ensure that the making and enforcement of these guidelines still provides a simple way to explore AI ethically and for the good of all. Governing bodies such as the United Nations, along with great thinkers and entrepreneurs, are now working to understand the unique challenges AI presents to humanity and how its risks and fears can be mitigated through policy and guidelines. The balance and hope lie in a thoughtful and concerned undertaking, aiming to regulate AI's potential for harm and prevent malicious use whilst still allowing the ability to discover new and exciting possibilities. Amidst the pursuit of these guidelines and policies, one thing remains clear: ethics will always be an inherently "human" concern.
AI, with its undeniable power, stands ready to shape our lives in ways we have yet to fathom, even surpassing the transformative impact of the internet. The duality of this technology leads to a simple question: Will the impact be for our betterment, or will it lead to our downfall? As we have no crystal ball and our future remains shrouded in mystery, we find ourselves at a defining moment — will we embrace the uncharted territory of AI’s potential, or shy away from the unknown? In our quest to navigate this new landscape, we must strive to maintain a fine balance, a delicate equilibrium between harnessing AI’s power for human progress and safeguarding against potential dangers. This balance, while challenging, will define our coexistence with AI. And as this unfolding chapter beckons, the most thought-provoking question of all lingers — what legacy will we leave behind as AI and humanity converge on this extraordinary journey?