The Evolution of AI Tools: From Chatbots to Agents to Agentic AI
Fueled by investments surpassing a quarter trillion dollars (Forbes, Nov. 14, 2024), AI technology has evolved rapidly and become an integral part of most people’s daily lives.
Individuals and industry alike now have at their fingertips highly sophisticated tools capable of understanding language, generating creative content and making complex autonomous decisions. AI technology, in varying degrees, has become embedded in almost every industry, including agriculture, finance, healthcare and manufacturing. AI tools promise to revolutionize productivity and expand the boundaries of human potential.
While the most commonly used tools are chatbots like OpenAI’s ChatGPT, many other AI systems are being deployed, built on different frameworks, models and approaches. In industry, AI agents, working in concert with sensors, are being used to assess environments, make decisions and take specific actions, such as predicting equipment failures before they occur, detecting defects in products, or alerting financial institutions to unusual transaction patterns that indicate fraud or other criminal activity.
To get a basic understanding of some of the AI tools that are transforming the way we work, let’s look at some of their essential differences and how they are being deployed in industry.
AI chatbots vs. agents: Content creation vs. decision making
A chatbot is commonly defined as a computer program that simulates human conversation to solve customer queries. Chatbots primarily process and generate text or speech to simulate dialogue and answer questions. They are usually optimized for natural language understanding and generation in conversational contexts. For a business operating in a digital, globally connected world, a chatbot on its website allows visitors to interact with the company 24/7 to get the help and answers they need.
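To make the conversational pattern above concrete, here is a minimal, purely illustrative Python sketch of a website chatbot. The intents, keywords and canned answers are hypothetical, and a production chatbot would typically call a language model or natural language understanding service rather than relying on keyword matching.

```python
# Minimal illustrative website chatbot: match a visitor's message against a few
# hypothetical intents and return a canned reply. A production system would use
# a language model or NLU service instead of simple keyword matching.

INTENTS = {
    "hours": (("hours", "open", "closing"),
              "Our online store is open 24/7; support staff reply from 8 a.m. to 6 p.m."),
    "shipping": (("shipping", "delivery", "track"),
                 "Orders ship within two business days, and tracking details are emailed to you."),
    "returns": (("return", "refund"),
                "Items can be returned within 30 days for a full refund."),
}

def reply(message: str) -> str:
    """Generate a response by matching keywords in the message to a known intent."""
    text = message.lower()
    for keywords, answer in INTENTS.values():
        if any(word in text for word in keywords):
            return answer
    return "I'm not sure about that. Would you like to be connected with a human agent?"

if __name__ == "__main__":
    print(reply("How long does delivery take?"))   # matches the shipping intent
    print(reply("Can I get a refund?"))            # matches the returns intent
```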
AI agents, on the other hand, are described as systems or programs that perceive their environment, make decisions and take actions to achieve specific goals. These systems can adapt their behavior based on feedback from their environment, typically gathered by sensors, which allows them to learn and improve over time. According to a post on the GitHub blog, “… agents hold the promise of pursuing complex goals with minimal direct oversight — and that means removing toil and mundane linear tasks while allowing us to focus on higher-level thinking.”
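The contrast with a chatbot becomes clearer when the perceive-decide-act loop in that definition is written out. The sketch below applies it to the equipment-failure example mentioned earlier; the vibration sensor, the 8.0 mm/s threshold and the maintenance action are hypothetical placeholders rather than a real industrial API.

```python
import random

# Illustrative perceive-decide-act loop for a simple monitoring agent.
# read_vibration(), the 8.0 mm/s threshold and schedule_maintenance() are
# hypothetical stand-ins for real sensor feeds and maintenance systems.

def read_vibration() -> float:
    """Perceive: sample a (simulated) vibration sensor, in mm/s."""
    return random.uniform(2.0, 10.0)

def schedule_maintenance(reading: float) -> None:
    """Act: in a real system this would open a maintenance work order."""
    print(f"Maintenance scheduled: vibration {reading:.1f} mm/s exceeds the limit")

def run_agent(cycles: int = 5, threshold: float = 8.0) -> None:
    history = []                          # simple memory the agent could learn from
    for _ in range(cycles):
        reading = read_vibration()        # perceive the environment
        history.append(reading)
        if reading > threshold:           # decide against the current goal
            schedule_maintenance(reading) # act on the environment
        else:
            print(f"Reading {reading:.1f} mm/s is within the normal range")
    # Feedback: a more capable agent would adjust its threshold from this history.
    print(f"Average of last {len(history)} readings: {sum(history)/len(history):.1f} mm/s")

if __name__ == "__main__":
    run_agent()
```

Unlike the chatbot sketch, this loop acts on readings from its environment and keeps a history it could learn from, rather than only generating replies to user messages.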
AI agents in healthcare
Healthcare organizations are using a specialized type of AI agent called agentic AI. In what is being called a dramatic step forward, such AI agents — not shackled by rules-based limitations — have the ability to act independently and can perform more tasks on behalf of an individual or organization.
A whitepaper posted on the Amitech Solutions website states, “Agentic AI represents a significant leap in the evolution of Artificial Intelligence. Unlike Traditional AI that executes pre-defined commands or Generative AI that can only create within set boundaries, Agentic AI demonstrates autonomy and learning while making goal-driven decisions independently — within the confines of the process it is asked to execute. It prioritizes outcomes over programmed rules, making it a flexible and powerful tool for solving real-world challenges.”
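As a rough illustration of what “prioritizing outcomes over programmed rules” can mean in code, the toy sketch below pursues a goal (clearing a hypothetical patient-intake backlog) by choosing whichever available action it estimates will move it closest to that goal, instead of executing a fixed sequence of steps. The actions and their estimated effects are invented for illustration only.

```python
# Toy goal-driven agent: rather than following a fixed rule sequence, it picks
# the action whose estimated effect brings a hypothetical patient-intake
# backlog closest to the target. Action names and effects are illustrative.

ACTIONS = {
    "auto_triage":       -4,   # estimated backlog reduction per cycle
    "send_reminders":    -2,
    "escalate_to_staff": -6,
}

def choose_action(backlog: int, goal: int) -> str:
    """Decide: pick the action expected to land closest to the goal."""
    return min(ACTIONS, key=lambda a: abs((backlog + ACTIONS[a]) - goal))

def pursue_goal(backlog: int = 20, goal: int = 0, max_cycles: int = 10) -> None:
    for cycle in range(max_cycles):
        if backlog <= goal:
            print(f"Goal reached after {cycle} cycles")
            return
        action = choose_action(backlog, goal)
        backlog = max(goal, backlog + ACTIONS[action])   # act and observe the outcome
        print(f"Cycle {cycle + 1}: {action} -> backlog {backlog}")
    print("Stopped at cycle limit; goal not reached")

if __name__ == "__main__":
    pursue_goal()
```

A real agentic system would combine this kind of goal-driven planning with learned models, richer feedback and human oversight.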
The widespread adoption of AI in healthcare
AI technology is being rapidly adopted in healthcare and integrated into areas as widespread as early disease detection, drug research and discovery, robotic surgery, and remote patient monitoring. According to research presented on the National Library of Medicine website, “AI-powered tools in primary care can assist in early screening for conditions such as diabetes, hypertension, and mental health disorders, enabling timely management and reducing the progression of these diseases. For chronic disease management, AI systems can provide personalized recommendations based on real-time patient data, helping healthcare providers create tailored lifestyle and treatment plans.”
Agentic AI systems and the human touch
While agentic AI systems can provide faster diagnostics, personalized treatments, proactive monitoring and operational efficiencies, they can also potentially improve the human touch in medicine. An article in the Harvard Business Review speaks to the benefits of agentic AI and the “affable” agents developed by the California firm Hippocratic AI: “Their ability to adapt to different settings, interpret human emotions, and show empathy makes agentic AI systems ideal for non-routine, soft skills work in areas such as healthcare and caregiving. Hippocratic AI, an agentic AI healthcare company based in California, has created a phalanx of AI agents tailored to different areas of healthcare and social support. The team counts among its ranks Sarah, an AI agent who ‘radiates warmth and understanding’ while providing help with assisted living. Sarah can ask patients about their day, organize menus and transport, and regularly remind patients to take their medication. Judy, another AI-powered agent, helps patients with pre-operative procedures, for example by reminding patients about arrival time and locations, or advising on pre-op fasting or stopping medications.”
Cautions related to use of AI in healthcare
While AI holds great promise in healthcare, challenges around privacy, bias, transparency, ethics, regulation and human oversight must be carefully addressed. What follows is a list of issues and concerns raised by Laura M. Cascella, MA, CPHRM, in Artificial Intelligence in Healthcare: Challenges and Risks.
- Biased data and functional issues. One of the major red flags associated with AI is the potential for bias. Bias can occur for various reasons; for example, the data used to train AI applications, as well as the rules used to build algorithms, might be biased. Additionally, bias might arise from a variance between the training data or environment and the real-life setting in which the AI program or tool is applied.
- Black-box reasoning. Many of today’s cutting-edge AI technologies — particularly machine learning systems that offer great promise for transforming healthcare — have opaque algorithms, making it difficult or impossible to determine how they produce results. This unknown functioning is referred to as “black-box reasoning” or “black-box decision-making,” and it presents concerns for patient safety, clinical judgment and liability.
- Automation bias. Humans, by nature, are vulnerable to cognitive errors resulting from knowledge deficits, faulty heuristics, and affective influences. In healthcare, these cognitive missteps are known to contribute to medical errors and patient harm, particularly in relation to incorrect or delayed diagnoses. When AI is incorporated into clinical practice, healthcare providers might be susceptible to a type of cognitive error known as “automation bias,” the tendency to over-rely on an automated system’s output even when it is incorrect.
- Data privacy and security. With the digitalization of health information, healthcare organizations and providers have faced growing challenges with securing increasing amounts of sensitive and confidential information while adhering to federal and state privacy and security regulations. AI presents similar challenges because of its dichotomous nature — it requires massive quantities and diverse types of digital data but is vulnerable to privacy and security issues.
- Patient expectations. AI offers vast potential for improving patient outcomes through advances in population health management, risk identification and stratification, diagnosis, and treatment. Yet even with this promise, questions arise about how patients will interact with and react to these new technologies and how these advances will change the provider-patient relationship.
- Training and education. The emergence of AI, its anticipated expansion into healthcare, and its sheer scope point to significant training and educational needs for medical students and practicing healthcare providers. These needs go far beyond developing technical skills with AI programs and systems; rather, they call for a shift in the paradigm of medical learning.
Ensuring responsible development and deployment in healthcare is imperative for AI’s successful integration into all aspects of the industry. Addressing the aforementioned concerns is critical to building AI systems that are fair, reliable and aligned with human values as well as with organizational business goals.
Is your business looking to upskill and educate its workforce?
The Washington State Community and Technical Colleges offer a wide range of technology-related classes and certificate programs that can quickly give your employees the new skills needed to excel in the workplace. For more information about these programs, please contact:
Brianna Rockenstire
Director
Center of Excellence for Information & Computing Technology
Email: brianna.rockenstire@bellevuecollege.edu
Tel: 425-564-4229