
June 2024 AI News

Our AI Lab's roundup of the latest articles and news relating to artificial intelligence and online services in the public sector

GWW Opinion

GWW Staff

June 4, 2024

Welcome to the June 2024 AI Newsletter from the GovWebworks AI Lab. The following articles touch on important topics relating to public sector online services:

  1. Multimodal assistants
  2. AI tutors
  3. AI coding and debugging
  4. Social media compared to AI
  5. European Union AI Act

Feel free to share with your colleagues and encourage them to sign up for the AI Newsletter and join the conversation. Or schedule a free consultation to discuss how AI can optimize your organization’s digital goals.


#1: Multimodal assistants have arrived

Key takeaway: The latest Large Multimodal Models (LMMs) represent the next leap in conversational assistants.

Reviewed by GWW Staff

OpenAI and Google both announced Large Multimodal Models (LMMs) – GPT-4o (the “o” stands for omni) and Gemini, respectively – that, in addition to understanding text, combine an understanding of images, video, and audio. We are evaluating safe ways to use these LMMs for government applications as we enter this next chapter in the race for artificial general intelligence (AGI).

Related reading: Large Language Model Applications for Government – our AI Lab update on the benefits, risks, and emergent guidelines for LLMs in the public sector

LMMs understand speech, emotion, and human interaction, and can verbally respond to questions using the video and audio on a smartphone. In contrast to LLMs (Large Language Models such as ChatGPT, which debuted in November 2022 and specialize in processing and generating text), LMMs integrate and process multiple data formats simultaneously, making them versatile in managing complex, multimodal information. Like HAL 9000 in 2001: A Space Odyssey, or Samantha in the movie Her, LMMs can “see” the world and “talk” with the people in it.
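
To make this concrete, here is a minimal sketch of what a multimodal request can look like in practice, using OpenAI’s Python SDK to send text and an image in a single prompt. The image URL and question are placeholders; audio and video input were demonstrated at launch but have been rolling out to developers separately.

```python
# Minimal sketch: sending text plus an image to a multimodal model
# via the OpenAI Python SDK. The image URL and question are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            # A single message can mix text and image parts.
            "content": [
                {"type": "text", "text": "What is shown in this photo?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```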

Other major players such as Meta, Amazon, Microsoft, and Anthropic are also working on their own versions of Large Multimodal Models. More LMM features and models should be rolling out to the general public within the next few months.

Footnote: Much discussed since OpenAI’s LMM release is whether the company copied Her actress Scarlett Johansson’s voice and mannerisms despite her refusal to be involved. The rights around intellectual property used by AI are an important topic to consider when creating conversational assistants, and one that underscores the need for trusted technical consultants.


#2: Meet your AI tutor

Key takeaway: AI educational tools bring inherent benefits and risks.

Reviewed by GWW Staff

AI-powered education tools are challenging traditional tutoring services by offering less expensive, personalized learning for students. New AI educational apps are integrating advanced models like GPT-4o to assist students across various subjects, in an effort to make education more accessible and personalized.

As with all new technologies, early adopters are still learning how best to leverage these tools and how to combat issues like bias, propaganda, and potential misuse. Pilot programs and an understanding of the potential pitfalls are important for a successful rollout.

See the MOOSE Online Education Platform for more about the online educational tools GovWebworks built for the Maine Department of Education.


#3: AI coding and debugging

Key takeaway: Tools for automated coding and debugging are providing support to developers.

Reviewed by GWW Staff

AI-powered coding and debugging is advancing every day, with many of the newest tools serving as “virtual developers”. We’ve been tracking the options and weighing their benefits against the challenges of adoption and reskilling. The following are a few of the tools we’ve been watching, with a sketch of the pattern they share after the list:

  • Devin AI, developed by Cognition, is an autonomous AI engineer capable of independent problem-solving and end-to-end software development. It leverages machine learning for continuous improvement and can do most development tasks, including writing code and interacting with APIs and external libraries.
  • Microsoft AutoDev automates complex software engineering tasks, including project management and Docker containerization. Beyond code generation, it can perform various actions within the Integrated Development Environment (IDE), such as building a project, running automated tests, and managing version control through Git.
  • Nustom is working with OpenAI and Index Ventures to create no-code options that enable non-technical individuals to collaborate with AI to create software, but human engineers are recommended to ensure software quality until AI agents are more reliable.
  • GitHub’s code-scanning autofix feature aims to streamline the process of debugging and addressing security vulnerabilities. Built on GitHub’s Copilot and CodeQL engine, it is claimed to autonomously remediate more than two-thirds of detected vulnerabilities, often without manual code changes. GitHub Advanced Security (GHAS) customers can use the tool to streamline security workflows, and its integration with OpenAI’s GPT-4 can generate fixes and explanations.
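
While each of these products differs, most build on the same underlying pattern: hand a model the broken code plus its error output and ask for a corrected version. The minimal sketch below illustrates that pattern using OpenAI’s Python SDK; the model choice, prompts, and failing function are assumptions for illustration, not any vendor’s actual pipeline.

```python
# Illustrative sketch of the core pattern behind AI debugging tools:
# send broken code and its error output to a model and request a fix.
# Model name, prompts, and example code are assumptions for illustration.
from openai import OpenAI

BROKEN_CODE = '''
def average(values):
    return sum(values) / len(values)  # fails on an empty list
'''

ERROR = "ZeroDivisionError: division by zero (called with values=[])"

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are a code-repair assistant. Return only corrected code.",
        },
        {
            "role": "user",
            "content": f"Fix this function.\n\nCode:\n{BROKEN_CODE}\nError:\n{ERROR}",
        },
    ],
)

# The proposed patch still needs human review before merging.
print(response.choices[0].message.content)
```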

See our post on Fine Tuning Our Technology Radar for more details on other technologies we’ve been reviewing.


#4: Similarities between social media and AI

Key takeaway: Harvard technologist argues for AI regulation based on social media impacts.

Reviewed by GWW Staff

Bruce Schneier, a public-interest technologist at Harvard’s Kennedy School, draws parallels between social media’s impact on society and the growing influence of AI. He identifies five fundamental attributes of social media – 1) advertising, 2) surveillance, 3) virality, 4) lock-in, and 5) monopolization – that have both positive and negative implications for society, and argues that AI, left unregulated, can further these issues and exacerbate existing societal challenges. Schneier notes the following parallels:

  1. AI-powered ads may become more manipulative and invasive, potentially embedded within AI chatbots, leading to increased user vulnerability.
  2. Surveillance in AI can lead to extensive data collection and manipulation by AI-powered platforms, similar to social media.
  3. Due to the viral nature of social media content, AI-generated misinformation can spread rapidly and influence public opinion.
  4. Lock-in refers to the tendency for users to become dependent on AI platforms and to face challenges transitioning between services due to data portability issues.
  5. As for monopolization, many of the tech giants that dominate social media are extending their reach and dominance through AI.

Schneier argues that regulation is needed to mitigate the negative impact of AI on society, including restrictions on certain AI applications, transparency rules, and antitrust measures. He calls for proactive measures to steer AI development in a positive direction. We’re evaluating these perspectives and how they apply to the services we provide.


#5: EU AI Act sets worldwide rules

Key takeaway: Details from European Union regulations signed in May 2024.

Reviewed by GWW Staff

The European Union’s AI Act, given final approval this May, represents one of the first major attempts at regulating AI technologies. It faces challenges in implementation and enforcement, as well as in addressing complex issues such as algorithmic bias and misinformation.

Key features include:

  1. Bans on certain high-risk AI use cases: The Act prohibits AI applications that pose risks to fundamental rights, such as in healthcare, education, and policing. It also bans uses that exploit vulnerabilities or manipulate behavior, as well as the use of real-time facial recognition in public places. However, law enforcement agencies will still have some leeway for certain purposes.
  2. Increased transparency in AI interactions: Tech companies will be required to label deepfakes and AI-generated content and inform users when they are interacting with AI systems. This aims to address concerns about misinformation and improve accountability, although challenges remain in reliably detecting AI-generated content.
  3. Establishment of a complaint mechanism: The AI Act will create a European AI Office to oversee compliance and handle complaints from citizens who believe they have been harmed by AI systems. This initiative aims to empower individuals and enhance accountability in automated decision-making processes, though it will require increased AI literacy among the public.


Learn more

  • Sign up for the AI Newsletter for a roundup of the latest AI-related articles and news delivered to your inbox
  • Find out more about using AI to optimize your organization’s digital goals
