GovWebworks AI Lab

March 2021 AI Newsletter

Our AI Lab's roundup of the latest articles and news relating to artificial intelligence and online services in the public sector
GWW Opinion

GWW Staff

March 9, 2021

Welcome to the March AI Newsletter from the GovWebworks AI Lab. The following articles touch on important topics in the industry, including:

  1. Custom virtual assistants offered by Amazon
  2. Using AI pattern recognition for hiring 
  3. Why AI struggles to understand sentences 
  4. Automating data collection with machine vision
  5. How bias in datasets can lead to ineffective AI

We hope you find this information as interesting as we do. If so, feel free to share with your colleagues and encourage them to sign up for the AI Newsletter and join the conversation! Or email us at ai@govwebworks.com to talk about using AI to optimize your organization’s digital goals.


#1: Amazon Announces White Label Alexa Integrations

Key takeaway: Brands and other large organizations can build their own custom virtual assistants without having to start from scratch

Reviewed by Tom Lovering

Amazon is opening up much of the AI toolset it used to create Alexa to third parties so they can build their own virtual assistants. Amazon engineers will assist companies in developing their own voice and wake words to create a truly unique assistant. The resulting product can combine existing capabilities (weather, timers, alarms, etc.) with industry-specific capabilities. Suggested use cases include in-car assistants, appliance support, and location-specific assistants.

Offering up this toolset potentially shifts the commercial approach to leveraging AI assistants and could lead to a proliferation of niche assistants integrated into many of the products and services we buy. At the same time, it potentially gives Amazon access to a whole new world of consumer data passing through its servers.

Amazon enables companies to access Alexa’s advanced AI to build their own intelligent assistants

 


#2: Who Gets Hired? AI Decides

Key takeaway: AI can use simple games to gather data and help screen job applicants with pattern recognition

Reviewed by Ravi Jackson

AI is increasingly being used to screen out unqualified applicants and to find the best personality fits for many companies. While AI often looks for certain words or patterns, newer tools that include simple online games can help assess a person’s skill, risk tolerance, and personality. These tools quickly prequalify the individuals who should move on to the next round of interviews, which usually means meeting with an actual person.

However, a word of caution: this technology isn’t perfect. Because AI uses past data to make future predictions, bias can creep in, and decisions may favor the majority profile in the training data rather than an individual’s unique strengths. It’s important to audit the selection process for unintended bias; a simple starting point is sketched below.
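
One common starting point for such an audit is to compare selection rates across applicant groups (the so-called four-fifths rule of thumb). The following is a minimal sketch in Python; the group labels and outcomes are hypothetical, and the 0.8 threshold is a rule of thumb for flagging results for review, not a legal standard.

    from collections import defaultdict

    # Hypothetical screening outcomes: (group, advanced_past_screen)
    outcomes = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals = defaultdict(int)
    passed = defaultdict(int)
    for group, advanced in outcomes:
        totals[group] += 1
        passed[group] += int(advanced)

    # Selection rate per group, compared against the highest-rate group
    rates = {g: passed[g] / totals[g] for g in totals}
    best = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / best
        flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
        print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")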

For the public sector this could have several implications:

  1. Governments may choose to adopt similar technology as part of their initial screening process for employees.
  2. Similar tests may be used to help screen applicants for certain types of services.
  3. AI can be used to gamify routine processes like hiring.

The computers rejecting your job application

 


#3: AI Still Has More to Learn

Key takeaway: AI can understand words really well, but whole sentences, not so much

Reviewed by Ravi Jackson

The perception of AI systems as being generally intelligent is an overestimation. It may be more accurate to think of current AI as being very good at operating within narrow tasks, given generally friendly (non-adversarial) data. Many AI tools appear to understand language and may even score higher than humans on a common set of comprehension tasks, such as identifying whether a sentence expresses positive or negative sentiment. However, these same tools don’t notice if the words in a sentence are jumbled up, or if the reordered words change the meaning of the sentence. So although AI can recognize individual words, these tools are not yet particularly good at understanding language and complex grammar.
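
A quick way to see why this happens: a bag-of-words representation, which still underpins many text classifiers, produces identical features for two sentences with opposite meanings. A minimal illustration (the sentences are made up):

    from collections import Counter

    a = "the service was helpful not slow"
    b = "the service was slow not helpful"

    # Bag-of-words features ignore word order entirely
    print(Counter(a.split()) == Counter(b.split()))  # True: same "features", opposite meaning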

This may not matter if the AI is being used to help fix a typo, but the way AI reads and interprets words and word order does matter if you’re using it to determine the meaning of a sentence. This matters to government agencies, particularly when considering greater use of virtual assistants, automated applications, and chatbots. In an increasingly virtual and contactless world, agencies will need to consider how their AI tools react to different wording and sentence structures.

This is why the majority of chatbot/assistant platforms currently focus on identifying the user “intent” rather than truly trying to understand the meaning of what users are saying. You can think of a chatbot intent as something the user wants to accomplish.
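
To make “intent” concrete, here is a deliberately simplified sketch of intent matching. Production platforms typically use trained classifiers rather than keyword lists, so treat the intent names and trigger phrases below as purely illustrative.

    # A toy intent matcher: each intent is something the user wants to accomplish,
    # represented here by a few trigger phrases.
    INTENTS = {
        "renew_license": ["renew my license", "license renewal", "renew registration"],
        "check_status": ["application status", "where is my application", "check status"],
        "office_hours": ["what time are you open", "office hours", "when do you close"],
    }

    def match_intent(utterance: str) -> str:
        text = utterance.lower()
        for intent, phrases in INTENTS.items():
            if any(phrase in text for phrase in phrases):
                return intent
        return "fallback"  # hand off to a human or ask a clarifying question

    print(match_intent("Hi, I'd like to renew my license please"))   # renew_license
    print(match_intent("Can you explain the meaning of this form?"))  # fallback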

Researchers are working on a variety of ways to overcome these language understanding issues. One approach is to force a model to focus on word order. Ensuring that a model recognizes word order means future AI and natural language processing tools will be better positioned to accurately interpret whole sentences and their meaning, not just individual words, matched patterns, or sentence similarity.

Jumbled-up sentences show that AIs still don’t really understand language

 


#4: Using Machine Vision “In the Field”

Key takeaway: Opportunity exists to leverage machine vision to capture data from legacy and non-Internet-connected devices

Reviewed by Adam Kempler

Some common uses of machine vision that regularly make the news (usually not for good reasons) include license plate readers and facial recognition. However, there are many other useful applications for this branch of AI, from document parsing to meter reading. This article is a good example of using machine vision “in the field”, and shows an opportunity to leverage low-cost devices and open-source technologies, such as TensorFlow, to help humans capture information and turn it into structured data. Tools like this can be developed quickly, and the data they collect can be fed into new or existing data capture and processing systems.

If a task currently requires a human to look at something and record the data they see, it can likely be automated with machine vision, freeing up resources and reducing costs.
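
As a rough sketch of the pattern described in the linked article: capture an image of the legacy display, run it through a small vision model, and emit structured data for downstream systems. The model file, image file, device ID, and output format below are hypothetical placeholders; a real deployment would use a model trained on images of the specific dial or meter.

    import json
    import numpy as np
    from PIL import Image
    import tensorflow as tf  # a TFLite runtime would be used on a small field device

    # Hypothetical TFLite model assumed to take a float32 RGB image and output one reading
    interpreter = tf.lite.Interpreter(model_path="gauge_reader.tflite")  # placeholder path
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Load a captured frame and resize it to the model's expected input shape
    frame = Image.open("meter_photo.jpg").convert("RGB")
    height, width = input_details[0]["shape"][1:3]
    pixels = np.expand_dims(np.array(frame.resize((width, height)), dtype=np.float32) / 255.0, 0)

    interpreter.set_tensor(input_details[0]["index"], pixels)
    interpreter.invoke()
    reading = float(interpreter.get_tensor(output_details[0]["index"])[0][0])

    # Hand the structured result to an existing data capture system
    print(json.dumps({"device_id": "meter-42", "reading": reading}))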

How screen scraping and TinyML can turn any dial into an API

 


#5: An Introduction to Bias In AI

Key takeaway: Bias in your datasets can lead to brittle AI solutions that don’t perform well

Reviewed by Adam Kempler

This is a nice, brief introduction to dataset bias, a critical topic that can impact any government organization implementing AI solutions. Bias can creep in whether you are using your own training data or pre-trained models. Recently I was working on a module for a CMS (content management system) that automatically recommends alt tags for uploaded images to improve SEO and findability. But even something as seemingly safe as this can introduce bias. Consider one of the examples from the linked article, where machine vision tools delivered varying levels of facial recognition performance depending on skin tone.

Machine vision/image recognition is just one area where bias may occur. Predictions based on data can carry unintended bias whenever the data isn’t a good representation of the real world. For example, data drawn from a few counties might not accurately represent the entire state (this is known as selection bias).
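
A toy illustration of that selection bias: if a statewide estimate is built only from a couple of counties, it reflects those counties, not the state. The county names and values below are made up purely to show the effect.

    # Hypothetical share of residents who apply for a benefit online, by county
    county_rates = {
        "urban_a": 0.82, "urban_b": 0.78,                  # counties with strong broadband
        "rural_a": 0.41, "rural_b": 0.38, "rural_c": 0.44,
    }

    statewide = sum(county_rates.values()) / len(county_rates)
    sampled = sum(county_rates[c] for c in ("urban_a", "urban_b")) / 2  # only two counties

    print(f"All counties: {statewide:.0%}")     # ~57%
    print(f"Urban-only sample: {sampled:.0%}")  # ~80% -- a badly biased estimate of the state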

The potential negative effects of bias on audiences are generally understood, and many cases have been publicly documented and discussed in recent years, such as inadequate (and even offensive) image labeling, unfairness in qualifying individuals for services, and bias in hiring.

But on an implementation level, it’s important to understand another key effect of bias in your solutions: it makes them less effective. As the article states, “Dataset Bias in the training data causes poor performance and poor generalization to future test data.” Poor generalization means that your solution is brittle and can’t adapt well, so your AI product will perform poorly in real-world usage. This is why it’s important to have some form of bias audit/review and moderation in place when implementing AI solutions, as an integrated part of your development cycle.

An Introduction to Bias In AI

 


Learn more

  • Sign up for the AI Newsletter for a roundup of the latest AI-related articles and news delivered to your inbox
  • Find out more about using AI to optimize your organization’s digital goals
