The Promises and Perils of AI
News about artificial intelligence is continually swirling around us, and its use is settling into our daily routines. ChatGPT, a large language model form of AI, is now used by the average person as commonly as email. Beyond the charming novelty of chatbots, AI has been busy transforming industries as diverse as healthcare and agriculture. At the same time, ominous warnings about AI’s unchecked proliferation and potential for misuse have led some industry leaders to call for a pause, citing ‘profound risks to society and humanity.’ These concerns were voiced by a high-profile list of signatories in an open letter published by the Future of Life Institute. To sift through the noise and the hype, we sat down with Bellevue College IT instructor Olav Opedal, Ph.D., to discuss the perils, the promises, and the career paths of AI. We thank Dr. Opedal for his time and contributions to this article.
Q: Tell us about your background and industry experience.
Olav Opedal, Ph.D.: I have a bachelor’s degree in computer science and a master’s degree in clinical psychology. I earned my Ph.D. in general psychology with an emphasis on research.
I worked in information security at Microsoft for approximately 16 years. I also worked there as a program owner in data loss prevention, where we were trying to implement artificial intelligence (AI) and machine learning (ML) alongside the human-curated rules that were already in place. In about 2014, I switched to full-time data science, working with individuals from Microsoft Research. It was then that we started building out that AI/ML capability.
After working at Microsoft, I did a stint at T-Mobile. While I was there, the company applied for a patent on an innovative solution for patch management in information security using AI/ML. Since I left Microsoft, I’ve continued to do information security and AI/ML in a more general way, whether it be for automation or for customer retention.
Q: What are the biggest trends in AI right now?
Opedal: AI is everywhere. Most people don’t realize how much it plays a role in their daily lives. For example, consider the case that holds my Apple AirPods. Through that little case, AI controls the charging of my earbuds, making sure that the longevity of the charge is as good as possible. So, even in such a small piece of equipment you now find AI/ML. What I don’t think people appreciate is that AI/ML has been around for a long time, but it has been in areas where most of us don’t think about it.
In radiotelephony, there were patents issued back in the ’80s, I believe. Optical Character Recognition (OCR) software has been in use for a long time. It’s used when you deposit a check at an ATM: it scans your check and presto! The money is credited to your account. A lot of that technology was invented quite a while ago. Digit recognition using neural networks has been around for a long time, too. Right?
The use of AI/ML has been on a slow, steady upward trajectory for many years. What people today are most likely familiar with are recommendation engines like those used by Netflix and Amazon. The reality is that people encounter AI/ML all day long in their lives. What has recently captured the public’s attention is the generative AI technology that runs ChatGPT.
What is remarkable about ChatGPT is how well it can converse and provide answers. Granted, some of the answers it provides are not as in-depth as I would have expected.
Here’s a quick example. I set up a new Wi-Fi system for my father at his house in Norway. During the installation, I forgot to set up his Google Assistant speaker, which is connected to his TV. Since no one was available to help him right away with the setup, he decided to ask ChatGPT for directions. He got step-by-step directions from ChatGPT, written out in Norwegian, and got it working.
So, I think ChatGPT has made people aware of how far AI has come. That said, when using ChatGPT, you need to know what you’re asking for, you need to understand enough of the domain you’re interested in, and you need to know what is correct and not correct coming out of ChatGPT. In other words, it’s a great starting point. It probably shouldn’t be the end point.
The concern I have with ChatGPT is that users will anthropomorphize it, that individuals will start to believe they are having a meaningful conversation with a sentient being. There are already chatbots being used as therapists.
Q: Do you have other concerns?
Opedal: There are many unresolved ethical issues that we as a society have yet to figure out. The more of these large language models are released, the more carefully we are going to have to think things through. The technology is proliferating rapidly. And it is not expensive to build.
Putin said, “whoever wins the AI race wins.” This means we are going to see more AI in weaponry. Technically, forms of AI have been used in missile guidance systems for decades.
So, it isn’t new. But I don’t think any of us understand what the implications are as the technology advances.
We have seen very clearly how information has been weaponized in the political realm. During the 2016 election, Putin had his minions distribute enormous amounts of disinformation and misinformation. The difference now is that you can automate the distribution process. Propagandists no longer need to employ thousands of trolls. Now almost anyone can use a large language model to accomplish the task. The fringes on both sides of the political spectrum can now deploy this technology for their own ends.
We must not let the appealing novelty of large language models seduce us. We must keep our guard up. We used to think that social media was benign. Right?
Q: What are some of the innovative ways various industries are using AI?
Opedal: You wouldn’t believe how much AI/ML is being used in agriculture. In fact, you can think of John Deere as an AI company now. The company’s 8R model tractor not only drives autonomously, utilizing six pairs of stereo cameras and advanced artificial intelligence to navigate farmland, but it also can gather data about the soil as it moves through the fields.
Farmers are using drones to map and evaluate crops and to deploy pesticides.
Compared to the very expensive 8R tractor, a drone is much more affordable. A farmer could acquire a simple drone to take pictures and then use computer vision to analyze those images. A professional drone can cost in the $7,500 to $35,000 range, which is not prohibitively expensive.
Q: Can you define computer vision technology?
Opedal: Computer vision is a field of AI that processes digital imagery, including photographs and video, to derive meaningful information; Optical Character Recognition (OCR) is one familiar application. For example, we talked about AI-enabled tractors, which have a video feed and a GPS feed. The GPS feed provides location information, and the video feed shows obstacles in the tractor’s path. Computer vision data analysis can provide farmers with useful information in many areas of agriculture, such as detecting and counting pests, inspecting fruit quality, and managing irrigation.
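As a rough illustration of the pest-counting idea mentioned above, the sketch below thresholds a tiny made-up grayscale "image" and counts connected dark regions. The pixel values and threshold are invented for the example; a real pipeline would run a library such as OpenCV on actual field photographs.

```python
# Toy computer-vision sketch: threshold a grayscale "image" (a list of pixel
# rows, 0 = dark, 255 = bright), then count connected dark regions.

def count_dark_spots(image, threshold=128):
    """Count 4-connected regions of pixels darker than `threshold`."""
    rows, cols = len(image), len(image[0])
    mask = [[pixel < threshold for pixel in row] for row in image]
    seen = [[False] * cols for _ in range(rows)]
    spots = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                spots += 1
                stack = [(r, c)]  # flood-fill one connected region
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return spots

# Two dark 2x2 blocks on a bright background -> two "pests".
img = [[255] * 8 for _ in range(8)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2), (5, 4), (5, 5), (6, 4), (6, 5)]:
    img[y][x] = 0
print(count_dark_spots(img))  # 2
```

The same threshold-then-label pattern underlies many of the agricultural uses described here, just applied to much larger images with learned models instead of a fixed cutoff.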
Q: How is AI being used in cybersecurity?
Opedal: It has been used in cybersecurity for quite some time, starting around 2015. You will see AI on endpoints—the devices that connect to and exchange information—within a computer network. The most obvious example is spam detection. Large security information and event management (SIEM) systems use AI/ML and produce tremendous amounts of data. And when you have large amounts of data, you are going to have anomalies. This is where humans work in conjunction with the AI/ML to distinguish what is concerning and what’s not.
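To make the spam-detection example concrete, here is a deliberately simplified sketch: score a message by the fraction of its tokens that appear on a list of spam-associated words. The word list and messages are invented for illustration; production filters use trained statistical models rather than a hand-made list.

```python
# Toy spam scorer: the higher the fraction of "spammy" tokens, the more
# suspicious the message. The word list below is invented for this sketch.
SPAMMY = {"winner", "free", "prize", "click", "urgent", "verify"}

def spam_score(message):
    """Return the fraction of tokens found in the spammy-word list."""
    tokens = message.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.strip(".,!?") in SPAMMY)
    return hits / len(tokens)

print(spam_score("URGENT! Click to verify your free prize, winner"))
print(spam_score("Lunch meeting moved to noon tomorrow"))
```

This also illustrates the cat-and-mouse dynamic discussed later: an attacker who swaps "free" for "complimentary" slips past a fixed list, which is why real systems keep retraining on fresh data.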
The issue in cybersecurity is that every time we build some new technology for information security, it doesn’t take long before it becomes obsolete. This is why you must consider humans and their behaviors. Gartner published a paper on this issue arguing that awareness and education are what will really make a difference going forward. The reason is that if you’re defending a network, it is a given that you use AI/ML, but nothing prevents the attacker from using AI/ML as well. Right?
Human behavior is well understood by attackers now. That’s why phishing is effective. Attackers use techniques from psychology to entice a person to click a link. And machine learning has been used to try to combat that by finding the likelihood of something passing through a network as being fraudulent. But at the end of the day, the attack is changed just enough that it doesn’t get detected the next time, and then it goes through. It’s a cat and mouse game. I don’t think that is going to change. I think attackers and defenders will continue to use technology to help in their pursuits.
Q: What are the AI skillsets that are most in demand?
Opedal: Obviously, you have to have coding skills, and now it seems that Python has become the de facto language for AI. SQL also continues to be very important, and surprisingly so. You have to be able to pull out the right data, clean the data, and group and aggregate it. SQL is just phenomenal for that.
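The pull-clean-aggregate workflow described here can be sketched with Python’s built-in sqlite3 module. The table name, columns, and values below are invented for the example; the point is the SELECT, the WHERE clause that drops bad rows, and the GROUP BY aggregation.

```python
# Minimal sketch of pulling, cleaning, and aggregating data with SQL,
# using an in-memory SQLite database. All data here is made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, action TEXT, duration REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("ana", "login", 1.2), ("ana", "query", 3.4),
     ("bo", "login", 0.9), ("bo", "query", None)],  # NULL row to be cleaned out
)

# Pull the right data, drop bad rows, then group and aggregate per user.
rows = conn.execute(
    """SELECT user, COUNT(*) AS n, ROUND(AVG(duration), 1) AS avg_s
       FROM events
       WHERE duration IS NOT NULL
       GROUP BY user
       ORDER BY user"""
).fetchall()
print(rows)  # [('ana', 2, 2.3), ('bo', 1, 0.9)]
```

In practice the same query skills transfer directly to the larger warehouses an AI/ML team feeds its models from; only the connection string changes.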
You also need to have a foundation in statistics and math—calculus and linear algebra. You don’t need an advanced math degree, but you need to have enough statistical understanding, and you need to have some business acumen, an understanding of the needs of the end-user. According to Gartner, 90 percent of AI/ML projects fail, and that’s often because what is built is not useful for the person it’s intended for.
It is also important to understand that AI development is a team sport. You have to have the soft skills to collaborate in a team environment. A lone-wolf AI/ML developer is not going to be very successful. You will need to work with other developers, DevOps engineers, and data engineers. At the end of the day, AI/ML is usually just a small part of a bigger product.
One question, two answers:
The following responses answer the question, “What is generative AI?” One comes from a Google search that surfaced a definition from a U.S. News article; the other comes from ChatGPT. Which do you find more useful?
What Is Generative AI? (From an article in U.S. News dated March 17, 2023)
Like other forms of artificial intelligence, generative AI learns how to take actions from past data. It creates brand new content – a text, an image, even computer code – based on that training, instead of simply categorizing or identifying data like other AI. The most famous generative AI application is ChatGPT, a chatbot that Microsoft-backed OpenAI released late last year. The AI powering it is known as a large language model because it takes in a text prompt and from that writes a human-like response.
GPT-4, a newer model that OpenAI announced this week, is “multimodal” because it can perceive not only text but images as well. OpenAI’s president demonstrated on Tuesday how it could take a photo of a hand-drawn mock-up for a website he wanted to build, and from that generate a real one.
What Is Generative AI? (Generated by ChatGPT)
Generative AI refers to a type of artificial intelligence that is designed to generate or create new, original data or outputs that resemble those created by humans. This type of AI is typically used in creative applications, such as music composition, image and video generation, or natural language processing. Generative AI algorithms are trained on large datasets and use machine learning techniques like deep learning, reinforcement learning, and neural networks to autonomously create new and unique outcomes. While generative AI holds immense potential for innovation, it also raises concerns about ethics, creativity, and ownership of the generated outputs.
Dr. Olav Opedal Bio
Dr. Olav Opedal, psychologist and data scientist, owns and operates Opedal Consulting, LLC, a veteran-owned consulting firm providing machine learning and AI consulting services. He holds a bachelor’s degree in computer science, a master’s degree in clinical psychology, and a Ph.D. in general psychology. His industry experience includes 20+ years at companies such as Microsoft, T-Mobile, and PwC; his teaching experience includes Green River College and Bellevue College. Dr. Opedal has worked as a data scientist since 2015, predominantly in cybersecurity. Additionally, he conducts academic research on the psychology of computer professionals, information security specialists, and hackers using both quantitative and qualitative methods. He also combines quantitative and qualitative methods when building models for cybersecurity purposes.
About the AI degree program at Bellevue College
Bellevue College’s Artificial Intelligence (AI) associate degree in applied science is a transfer degree that helps prepare graduates to work in the field of artificial intelligence software development. The program provides students with the knowledge and skills to design, implement, and analyze basic machine learning and embedded systems that run AI applications. The program also introduces students to the science of computer vision and the fundamentals of robotics control systems.