POV: Why ethics, education, and empathy are key to our AI future
By Raj Verma
In recent days, a group of nearly 10,000 scientists, technologists, entrepreneurs, and academics has signed an open letter calling for a pause on training AI systems more powerful than GPT-4. In a short span of time, we have seen huge developments within the AI space, both good and bad. As a computer science engineer, I know firsthand that the innovations we are seeing in AI have been coming for some time. While some developments are quite concerning, these technologies, if used responsibly, can facilitate great social advancement, from assisting radiologists with medical screenings to helping visually impaired people complete everyday tasks. The possibilities are endless, but we will only maximize the good if we pursue it responsibly.
I believe there are three key areas that we must take into consideration while thinking about AI: ethics, education, and empathy. These “3 E’s” will be paramount to ensuring that AI helps drive social good and a better world for us all.
I am a fervent proponent of the ethical use of AI. While we cannot—and I don’t believe we should—resist this technology, we must also be aware of its pitfalls and act to mitigate them. One area where the ethics of AI demands our attention is data and data privacy. For example, as I just mentioned, AI can be used to develop lifesaving medical equipment. However, we must also consider how our medical data is being stored and with whom it is being shared. The ability to detect disease from our coughing patterns could be enormously powerful, but there is also the risk that this data could be used against us.
A close parallel is what we’re seeing on social media, where platforms use algorithms to keep people addicted. Studies have linked these algorithms to high rates of sadness and suicidal thoughts, particularly among teen girls, damaging our society writ large. I firmly believe that we must hold tech leaders accountable for ensuring that their technology, including the AI now booming, is developed responsibly and used ethically.
My second “E” is education. Recently, my teenage son and I played around with ChatGPT’s essay-generating capabilities, just to see what the fuss was about. Despite the many uses of generative AI apps, the news has been filled with concerns over their potential to help students cheat on their homework. A recent survey found that 51% of students believe that using AI tools to complete assignments is a form of cheating; yet 22% also report having used AI tools to help them complete exams and other projects. In our home, instead of immediately banning ChatGPT, my son and I explored the app together. We found that while the essay it produced was better than expected, it was still filled with flaws and was in no way an “A” paper. Most importantly, the essay wasn’t written in my son’s voice. This led to an important discussion about how no machine can fully capture our unique views, experiences, and brilliance.
This conversation mattered because I am confident these apps will only grow more sophisticated with time. The solution is to teach young people, and ourselves, how to properly engage with this technology. Generative AI apps still have major shortcomings: their accuracy is limited, and if they are fed biased data they will produce biased outputs. However, there are immediate benefits to using these apps in education, from quick access to information to serving as an additional resource for teachers, parents, and students. They can help with everything from creating lesson plans to checking homework. In many ways, there is a parallel to the dawn of the internet. Being able to quickly google a statistic or fact is much faster than poring over books. Search engines enhanced our ability to research and work; they did not replace it. In this vein, we must think of apps like ChatGPT as tools that strengthen our ability to think critically, not as necessary evils.
Finally, I believe we must see AI as a way to enhance human skill, not as an attempt to replace humanity. Every era of automation has been rife with fears about machines replacing people’s jobs. This is a trend that dates back hundreds of years, but it seems especially relevant now. Some jobs will be replaced as new technologies come on stream. However, new jobs will be created in their stead to complement these advancements. And some jobs, particularly those that require empathy, will never be done by a machine. Many roles in fields such as healthcare, childcare, elder care, and more will always require a human touch. As machines become more intertwined in human life, I believe these jobs will not only become more important, but also higher paying.
AI is trendy now, which means that it is the source of much excitement and trepidation. I believe that it is much more important to look at this technology, and all its implications, soberly and analytically. If we use the “3 E’s” as a lens, we can guide our use of AI so that it benefits humanity in unimaginable ways, giving us all hope for a brighter tomorrow.
Raj Verma is the CEO of SingleStore.