
ARTIFICIAL intelligence may find a niche in industries like policing and teaching - and in some ways it's already happening.

As businesses prioritize productivity and strive to cut costs, AI is seen as an easy way to streamline tasks and eliminate the reliance on human workers.

Artificial intelligence is already being used in industries like teaching and legal services - and its influence is only expected to grow in the near future. Credit: AFP

However, there are downsides like the amplification of biases and inaccurate information. Plus, what happens when AI begins to learn from itself?

Here are a few of the biggest AI developments that are expected to catch on in the near future.

Self-teaching AI

Media has popularized the idea of AI systems as obedient machines reliant on direction from human beings.

However, developers have learned it is possible for these systems to go MAD.

The acronym stands for model autophagy disorder, which describes the process by which AI learns from its own output, producing increasingly nonsensical results.

The term "autophagy" comes from the Greek "self-devouring," aptly capturing the way a system trains itself on AI-synthesized content like a snake eating its own tail.

Researchers at Rice and Stanford University were among the first to discover that models decline in the quality and diversity of their responses without a constant stream of new, real data.

Complete autophagy occurs when a model is trained solely on its own output, but machines can also train on data published by other AI programs.
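The dynamic is easy to reproduce in miniature. In the hypothetical Python sketch below, a bare-bones "model" - just a Gaussian fit - is retrained each generation solely on samples drawn from its predecessor. It is a toy illustration of autophagy with invented numbers, not the Rice and Stanford experiment itself, but it shows how the diversity of the output decays once no fresh real data enters the loop.

```python
import numpy as np

# Toy autophagy loop: the "model" is just a Gaussian fitted to its
# training data. Each generation trains only on samples produced by
# the previous generation - no new real data ever enters the loop.
rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=20)  # generation 0: real data

for gen in range(301):
    mu, sigma = data.mean(), data.std()        # "train": estimate the distribution
    if gen % 50 == 0:
        print(f"generation {gen:3d}: mean={mu:+.3f}  std={sigma:.3f}")
    data = rng.normal(mu, sigma, size=20)      # next gen trains on synthetic output
```

Run long enough, the printed standard deviation shrinks generation after generation - a statistical echo of the snake eating its own tail.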

This introduces a problem as more and more AI-generated content floods the web. It is increasingly likely that such material is being scraped and used in training datasets for other models.

It is difficult to gauge how much internet data is generated by artificial intelligence, but that amount is growing quickly.


NewsGuard, a platform that rates the credibility of news sites, has been tracking "AI-enabled misinformation" online.

By the end of 2023, the group had identified 614 unreliable AI-generated news and information websites, dubbed "UAINS." As of this week, the number has swelled to 987.

The websites span a whopping 16 languages and bear generic names to appear like legitimate news outlets. Some push political falsehoods, while others fabricate celebrity deaths and events.

It is expected that the scale of the issue will only increase in years to come as more and more content is generated by machines.

Models may begin to show signs of MAD more frequently as they feed on this content, amplifying lies and distorting facts.

Researchers have identified a phenomenon known as model autophagy disorder, aka MAD, where AI systems train themselves on synthetic data. Credit: Getty

The tipping point comes when people fail to distinguish between human output and that of machines, taking the misleading content as fact.

AI police

Proponents argue that AI could supplement certain areas of police work, from investigating crimes to answering 911 calls.

And some industry experts believe artificial intelligence will soon play a key role in crime analysis, reducing the strain on an industry facing widespread staff shortages.

A 2019 survey conducted by the Police Executive Research Forum revealed that 86% of police agencies reported an officer shortage.

A follow-up survey two years later found hiring rates had fallen by 5% across agencies of all sizes, while retirement rates rose by nearly 50%.

For this reason, some insiders are eager to adopt AI tech. So where could it fit in?

Industry experts say AI could be used in police work to reduce the reliance on human officers and streamline tasks like data analysis. Credit: Getty

AI could highlight key information to be used in investigations, pulling details from social media logs or financial statements.

New technology could automate data collection and interpretation, leaving crime analysis teams with more time for complex tasks.

Similarly, the systems can be used for fraud detection, identifying patterns in documents that are indicative of illegal activities.
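As a minimal sketch of what that kind of pattern-spotting could look like - with invented transaction features, not any agency's real system - an off-the-shelf anomaly detector can flag records that sit far from the bulk of the data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical fraud screen: flag transactions whose features look
# anomalous. The features (amount, hour of day) are invented here.
rng = np.random.default_rng(7)
normal = np.column_stack([rng.normal(60, 15, 500),    # typical amounts
                          rng.normal(14, 3, 500)])    # typical hours
odd = np.array([[4_000.0, 3.0], [2_500.0, 4.0]])      # planted outliers
transactions = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)                   # -1 marks an anomaly
print(transactions[flags == -1])                      # candidates for human review
```

The key design point is that the system only surfaces candidates - a human analyst still decides what, if anything, is actually suspicious.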

One of the most likely applications is facial recognition technology - a tool that is already in wide use.

Facial recognition software could become more widely used, identifying suspects and missing persons by comparing their faces to a database. Credit: Getty

The software locates a face in surveillance footage and maps its key features like the distance between the eyes and the shape of the lips.

The machine then compares the facial template to a database of known faces to make a match.
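The open-source face_recognition library implements that same locate-encode-compare pipeline, and a minimal sketch - with placeholder file names, and no connection to any police department's actual software - looks like this:

```python
import face_recognition  # open-source library built on dlib

# Placeholder images: a known face and a frame to search.
known_image = face_recognition.load_image_file("database_photo.jpg")
frame = face_recognition.load_image_file("surveillance_frame.jpg")

# Each encoding is a 128-number template of a face's key features.
known_encoding = face_recognition.face_encodings(known_image)[0]
candidate_encodings = face_recognition.face_encodings(frame)  # one per face found

for encoding in candidate_encodings:
    # Lower distance means more similar; the library's default match cutoff is 0.6.
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print(f"distance={distance:.3f}  match={match}")
```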

Proponents say this could be used to identify suspects or missing persons, but concerns about privacy and civil liberties continue to present a challenge.

For starters, there's the possibility of algorithmic discrimination, where an AI system amplifies harmful stereotypes.

The technology could intensify racial and gender biases, leading to unfair treatment.

Moreover, there's the possibility that tools like facial recognition could be weaponized against journalists or political opponents.

With the positives come the negatives, and it is easy to see how facial recognition tech could be used to target journalists and political opponents. Credit: Getty

AI lawyers

Analysts predicted the legal industry would see AI-driven job losses in 2023, but no such thing happened.

But is it too soon to assume the storm has blown over?

A report from Goldman Sachs estimated that nearly half of legal work could be automated.

And a study by researchers at Princeton University, the University of Pennsylvania, and New York University concluded that law is the industry most exposed to the new technology.

It is easy to see how chatbots that specialize in mimicking speech could intrude on legal work, especially as technology continues to get better at analysis and human-like language.

Steven Schwartz, a New York lawyer, admitted to using ChatGPT to research a brief, which cited six nonexistent court decisions. Credit: Getty

OpenAI even announced earlier this month that it had trained a model called CriticGPT to catch errors in ChatGPT's output.

"We found that when people get help from CriticGPT to review ChatGPT code they outperform those without help 60% of the time," the company wrote in a press release. 

However, the technology has its flaws, notably its tendency to make up information, or "hallucinate" in AI parlance.

And while proponents insist these defects can be fixed, they carry serious weight in an industry that hinges on the interpretation and analysis of facts.

Schwartz, his partner, and their firm were all hit with sanctions and ordered to pay thousands of dollars. Credit: AFP

Stoking naysayers' fears are examples like the case of New York lawyer Steven Schwartz.

Schwartz admitted in May 2023 that he had used ChatGPT to help research a brief, which cited six fake court decisions, in a client's personal injury case.

Schwartz said at a hearing the following month that he "never" imagined ChatGPT could lie and did not intend to mislead the court.

Prompted in part by Schwartz's case, a federal judge in Texas began requiring lawyers to certify either that they did not use AI to draft their filings or that a human had checked the accuracy of any AI-generated text.

A judge later imposed sanctions on Schwartz, his partner Peter LoDuca (whose name was on the brief), and their firm, ordering them to pay a $5,000 fine.

Legal tech startups like Casetext are attempting to curtail hallucinations by building software that runs on top of chatbots.

These programs can comb through legal documents, draft deposition questions, and even propose contract revisions.

Morehouse College plans to adopt AI teaching assistants that will be available any time to answer students' questions. Credit: Morehouse College

AI professors

While concern about AI taking jobs is nothing new, it may be happening sooner than you think.

This fall, Morehouse College plans to introduce AI teaching assistants - three-dimensional avatars complete with digital whiteboards.

Unlike actual professors, these robots don't have to eat, sleep, or take personal time, meaning they will be available for students 24/7.

Students will access the program through the Google Chrome browser and type their questions into a box or speak aloud.

The virtual assistant will return a verbal response in the student's native language to mirror the classroom experience.

The endeavor is spearheaded by Muhsinah Morris, a senior professor in education at Morehouse.

Morris has denied that students' questions will be used to train any large language model.

The school has experimented with emergent technology in the past, launching the nation's first so-called metaversity three years ago. Credit: Alamy

Every professor is expected to adopt an AI assistant within three to five years.

And this won't be the school's first foray into cutting-edge tech.

Morehouse partnered with VictoryXR, a VR education software company, to launch the country's first "Metaversity" in 2021.


Students donned Meta Quest VR headsets and attended virtual lessons as the Covid-19 pandemic raged.

According to Morris, this experiment made Morehouse the blueprint for virtual reality classrooms at other historically Black colleges and universities.

What are the arguments against AI?

Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:

Loss of jobs - Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are being trained on their work and wouldn't function otherwise.

Ethics - When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.

Privacy - Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: the EU adopted legislation in 2016 to protect personal data, and similar laws are in the works in the United States.

Misinformation - As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google's generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects - such as AI dispensing the wrong health information.
