Artificial Intelligence (A.I.) is no longer a distant future—it’s reshaping how we live and work today. With an estimated 378 million users worldwide and a market projected to hit $244 billion this year, A.I.’s influence is expanding rapidly. But what does this mainstreaming of A.I. mean for society?
Rashmi Sharma, assistant professor and educational technologist at Western Illinois University, traces A.I.’s evolution from a decades-old academic field through waves of progress and setbacks to today’s explosion of powerful tools. Her research highlights the profound benefits and serious risks A.I. poses—and why thoughtful governance is essential to harness its promise.
'It went through waves of optimism, disappointment (aka 'A.I. winters') and revival as new methods and computer power arrived,' she added.
Moments central to its continued evolution include symbolic/knowledge-based A.I. in the mid-20th century; computing advancements from the 1980s through the 2000s; and the recent explosion of large-scale deep learning and transformer models over the last 13 years, Sharma explained. Those breakthroughs, plus massive cloud data centers and open research, led to the generative A.I. systems people use today, she noted.
What does this mean for the future, and where is A.I. headed? Expect smarter models tailored to fields like healthcare, law, and finance.
A.I. will increasingly embed itself into everyday software, devices, and services, including government and medical workflows.
Meanwhile, governments and industries are racing to establish regulations and safety measures to manage risks and ensure efficiency.
'Some of these trends are already visible in deployments and policy pilots today,' she added. 'And like anything surrounding technology there are advantages, disadvantages and dangers involving A.I.
'Advantages, 'for the good,' include the automation of routine tasks, which saves time; helping professionals, such as doctors, lawyers and researchers; greater accessibility through translation and captioning; and new creative tools, such as art, music and story ideas.'
And with the 'for the good' come the dangers and the negatives, such as misinformation, misuse and social harm, including fake profiles and realistic 'deepfakes' (a manipulated video, audio or photo created using A.I. that makes a person appear to do or say something they never did).
Other risks include bias and unfair outcomes, as models trained on biased data can produce discriminatory outputs; job displacement; and the misuse of facial recognition and behavioral profiling. The environmental costs are also high: running A.I. data centers consumes enormous amounts of electricity and water and increases emissions, Sharma explained.
'A.I. makes it easier to produce convincing fake images, audio, or video and to create fake social-media personas and coordinated disinformation. There are detection tools and legal measures being developed, but it’s a cat-and-mouse problem in that as detection gets better, generation gets better too,' she noted.
'Today's reality is that deepfakes have already been used in politics and crimes, and tackling them requires tech, policy and media/A.I. literacy.'
Using deepfakes as an example, Sharma stated there is no single, foolproof method to determine the legitimacy of a video, photo or audio recording, but there are practical signs and steps.
First, check the source.
Who posted it? Verify via official accounts, multiple reputable outlets or a reverse-image search.
Look for technical glitches. Minor inconsistencies such as odd lighting, mismatched reflections, blinking or eye issues in video, garbled text in images or unnatural phrasing can sometimes give a fake image or video away.
Look for tags or watermarks. Professional outlets increasingly add an official tag, watermark or link to the origin; their absence is a red flag.
Verify claims with other independent sources and primary documents. Detection tools also exist to check for A.I.-generated content.
'First and foremost, experts emphasize skeptical reading and verification as the first line of defense,' Sharma said.
Despite its risks and uncertainties, A.I. is rapidly expanding across industries. Big tech firms and startups alike use A.I. to automate customer support and marketing. In healthcare, administrative tasks are increasingly handled by A.I., which is also being tested to approve or deny insurance claims—raising concerns about safety and fairness. Experts emphasize that human review remains essential for life-or-death or high-stakes decisions. Meanwhile, regulators are considering rules that require explanations and human oversight. At the same time, demand is growing for tools that help patients challenge claim denials.
Within education, A.I. can be used for content creation, grading and tutoring assistance. According to Sharma, it can provide personalized learning, give instant feedback and help students practice, but it also makes plagiarism and shortcutting easier.
Outside of business and industry, A.I. platforms, such as ChatGPT, are being used by individuals from around the world to help write reports, generate images, search the web and more, Sharma pointed out.
With all of its uses across numerous platforms, is A.I. controlling us?
'Not in the sci-fi sense of taking over humans, but systems do shape choices and behavior in real ways: recommendation algorithms steer attention; targeted advertising nudges consumption; automated decisions affect hiring, lending and healthcare pre-authorizations,' she said. 'When those systems are opaque, optimized for engagement or cost-cutting, or lack adequate human oversight, they can exert outsized influence that looks like 'control.' The remedy is transparency, auditability and human oversight.
'There's no credible evidence that current models are autonomous agents with independent goals. The bigger, realistic risks are social and systemic (misinformation, biased automated decisions, concentration of power, safety failures), not a Hollywood takeover (such as The Terminator),' Sharma added. 'Still, A.I. safety researchers call for careful governance as systems grow in capability.'
Recent news stories and reports have shown that young people are using A.I. chatbots for emotional support, and some interactions have been harmful and inconsistent, particularly when responding to someone experiencing suicidal ideation.
'Researchers and regulators are increasingly warning that chatbots are not a substitute for trained human care; studies call for guardrails, clearer safety standards and better crisis-response behavior from companies,' she said. 'Lawsuits and investigations are also underway in some high-profile cases. This is a serious, active area of concern and research.'
The pros and cons of A.I. run the gamut, Sharma concluded, with the pros including improved diagnostics, greater accessibility tools, the automation of routine tasks and the acceleration of research.
Whether the good outweighs the bad depends on choices, including regulation, transparency and funding for safety and ethics.
'With the right governance — audits, liability rules, human-in-the-loop mandates for high-stakes decisions, and investment in public-interest models — the benefits can be shared more widely,' she said.
'Without them, harms can concentrate and grow.'
Editor’s Note: As we were working on this story, The Community News Brief received its monthly edition of Press Lines, a publication from the Illinois Press Association, which contained an announcement from the Gannett conglomerate regarding its new partnership with Perplexity A.I.
Essentially, Gannett-owned papers, such as USA Today, McDonough County Voice, and Galesburg Register Mail, will be using this new A.I. service to curate “local” stories. What does that mean for the reader? This A.I. service, which uses a web browser, can take questions from readers and create “stories” based on the questions for the “local journalists” to work with. According to Gannett: “This partnership ensures that when users ask questions about their communities, they get answers grounded in our verified reporting.”
In researching this story, The Community News Brief staff found several major publishers, including News Corp (owner of The Wall Street Journal and New York Post), The New York Times, and the BBC, have taken legal action against the A.I. company Perplexity. These actions range from lawsuits to cease-and-desist letters.
Other media organizations—Forbes, the Chicago Tribune, the Denver Post, Encyclopedia Britannica, and Merriam-Webster—have joined broader legal challenges targeting Perplexity and A.I. giants like OpenA.I. and Microsoft.
The lawsuits accuse these companies of copyright infringement and of generating A.I. content that contains errors falsely attributed to the publishers, risking harm to their reputations.
At The Community News Brief, our stories about local news are written by local journalists who ask the questions, gather the information and craft the story for your reading pleasure. While A.I. has its place, such as for proofing and editing, we believe in journalism the “old-fashioned way,” which our readers expect and deserve.