Google Gemini is ‘the tip of the iceberg’: AI bias can have ‘devastating impact’ on humanity, say experts
Substantial backlash against Google’s Gemini artificial intelligence (AI) chatbot has elevated concern about bias in large language models (LLMs), but experts warn that these issues are just the “tip of the iceberg” when it comes to the potential impact of this tech across industries.
The rapid advancement of AI has driven significant progress in various fields. It can help analyze medical imagery like mammograms and X-rays, accelerate the development of new drug treatments, optimize energy use and assist businesses in making informed decisions by sorting large quantities of data.
However, the adoption of AI by governments and corporations for its problem-solving capabilities has also been met with considerable caution.
Adnan Masood is a Microsoft Regional Director and MVP (Most Valuable Professional) for Artificial Intelligence. As Chief Architect of AI and Machine Learning at UST, he collaborates with the Stanford Artificial Intelligence Lab and MIT CSAIL and leads a team of data scientists and engineers building AI solutions.
“Artificial intelligence is an amazing catalyst for digital transformation. Everywhere from wealth management to population health to touchless retail operations, technologies like machine learning and computer vision are making algorithms fast, portable and ubiquitous,” he told Fox News Digital.
But he also highlighted the possibility of substantial downsides. Ask people to identify the threat they see coming from AI and the answers will range from “the robots are taking our jobs” to “Big Brother is watching us.” While Masood admits these are all reasonable concerns, he believes the most significant challenge humanity faces from AI lies at the “heart” of its algorithms.
“AI systems are not created in a vacuum. Their behaviors reflect the best – but also the very worst of human characteristics,” he told Fox News Digital. “These models are prejudiced – and it is up to us to fix them.”
According to Masood, self-perpetuating bias is the biggest threat posed by AI and can have a “devastating impact” on health, job opportunities, access to information and even democracy.
The question is: What can society do about training data that simply reflects ingrained societal biases? There are currently few regulations around algorithmic accountability, though Masood believes some organizations and governments are making progress.
“With AI evolving at such a dramatic speed, already-problematic societal inequalities are being reinforced even as I write. And if we don’t tread carefully, these models will cause irreparable damage,” he said.
LexisNexis Risk Solutions Global Chief Information Security Officer Flavio Villanustre told Fox News Digital the potential impact of AI models can range from “slightly inappropriate” responses to outcomes that could break existing anti-discrimination laws. Depending on its application, AI could cause issues in company hiring processes and wrongly inform decisions related to state benefits eligibility, loan rates, college admissions and “countless” other possibilities.
Masood agreed that the misuse of automation to streamline state government work and recruitment screening is a salient, everyday example of how machine learning algorithms can exacerbate systemic biases.
In 2018, it was reported that Amazon had discovered its experimental AI hiring software discriminated against resumes that mentioned the word “women’s” and against candidates from all-women’s colleges. The algorithm had simply learned from the company’s limited history of hiring female engineers and computer scientists. The software was later scrapped.
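The dynamic behind that failure is straightforward to reproduce. The following is a minimal sketch with synthetic, hypothetical data (not Amazon's actual system) showing how a model trained on historically skewed hiring outcomes learns to penalize a feature that acts as a proxy for gender, reproducing the bias rather than removing it.

```python
# Minimal sketch with synthetic data: a hiring model trained on historically
# skewed outcomes learns to penalize a proxy feature for gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Feature 0: skill score; feature 1: proxy such as "attended a women's college"
skill = rng.normal(0.0, 1.0, n)
womens_college = rng.integers(0, 2, n)

# Historical labels: equally skilled candidates with the proxy set were hired less often
p_hire = 1.0 / (1.0 + np.exp(-(skill - 1.5 * womens_college)))
hired = (rng.random(n) < p_hire).astype(int)

X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, hired)

# The learned weight on the proxy is strongly negative: the historical bias is
# reproduced by the model, not corrected.
print("weight on women's-college proxy:", round(model.coef_[0][1], 2))
```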
That same year, studies found that Microsoft’s facial recognition software assigned more negative emotions to Black men than to their White counterparts.
“These examples are just the tip of the iceberg when it comes to the way technology can amplify oppression and undermine equality,” he said. “With AI becoming more ubiquitous, cases become larger by orders of magnitude, paving the way to a dystopian future of machine-rule.”
Masood noted there are also troubling instances of AI racism embedded in justice systems, as when the COMPAS recidivism algorithm was found to have discriminated against people of color.
“If these issues are not concerning enough, we are starting to see a more pervasive use of these models in medical applications for diagnostic and therapeutic purposes. If, due to bias, a model incorrectly assesses the condition of a patient or the appropriate treatment, it could lead to life-altering consequences,” Villanustre added.
Kirk Sigmon, a Washington, D.C.-based attorney specializing in artificial intelligence and machine learning (ML) intellectual property, suggested that “virtually all” AI models are biased.
He noted that artificial neural networks are trained on whatever data is available in bulk, including text from books, images from the internet and more. As a result, whatever limitations are present in that data become “weaknesses” of the trained model.
“Google’s approach to hiding the bias in its models is via secret prompt engineering – that is, changing the nature of what you ask the model to do by adding additional words/content (like multicultural). This also seems to be the approach taken by OpenAI in their ChatGPT product. The problem is, they’ve not actually fixed the underlying bias at all: they’re just secretly changing what users ask for to avoid public relations issues, promote a particular agenda, or the like,” he told Fox News Digital.
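To make that mechanism concrete, here is a hypothetical sketch of the kind of silent prompt rewriting Sigmon describes. The injected wording, the HIDDEN_MODIFIERS table and the send_to_model stand-in are assumptions for illustration only, not Google's or OpenAI's actual implementation.

```python
# Hypothetical sketch: a wrapper silently rewrites a user's request before it
# reaches the model. All names and injected wording here are illustrative.

HIDDEN_MODIFIERS = {
    "people": "people from a diverse, multicultural range of backgrounds",
    "person": "person depicted in a diverse and inclusive way",
}

def rewrite_prompt(user_prompt: str) -> str:
    """Inject wording the user never typed to steer the model's output."""
    rewritten = user_prompt
    for term, replacement in HIDDEN_MODIFIERS.items():
        rewritten = rewritten.replace(term, replacement)
    return rewritten

def send_to_model(prompt: str) -> str:
    # Stand-in for a call to an image- or text-generation API.
    return f"[model output for: {prompt!r}]"

user_request = "Generate an image of people at a 1943 engineering conference"
print(send_to_model(rewrite_prompt(user_request)))
# The user only ever sees the output; the modified prompt is never shown.
```

When the rewritten prompt contradicts the historical context of the request, the output drifts from what the user asked for, which is the failure mode Gemini was criticized for.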
Sigmon said this often results in significantly less helpful, “if not outright comical,” outputs that discourage people from using the tool. Gemini was the latest AI to face heat for its responses after it produced historically inaccurate images that downplayed or outright removed White people, leading to public apologies from Google and a pause on the image generation feature.
“If society plans to increasingly rely on tools like ChatGPT and Gemini, the implications of secretive prompt engineering can be frightening. We might not have much of an issue with an AI model trying to avoid outright or inadvertent racism, but the very same secret prompt modification strategies might be used to change the public’s perception of historical events, bury company scandals, or the like,” Sigmon said.
“In other words, the very same strategies used by Google to ensure output is multicultural and inoffensive could be used to manipulate the public in extremely damaging ways,” he added.
Ruby Media Group CEO Kris Ruby, who recently uncovered a trove of data on Gemini, told Fox News Digital that biased AI can recreate societal norms, cultures and values in ways that strip historical context. If facts are removed or altered, a corporation can cultivate its own set of “facts” that align with its preferred worldview.
Ruby, who wrote “The Ruby Files – The Real Story of AI Censorship,” stressed that those in charge of shaping the current information environment must be held accountable, as the architecture of AI products can alter the future digital landscape society depends on for education and commerce.
Furthermore, if the data scientists responsible for making critical decisions lack political diversity, users will be left with a “lopsided product” that “skews to the collective bias of a product team.”
“AI is transforming our society,” she added. “As we become more dependent on a modern digital infrastructure embedded with machine learning, we must understand the foundation of the models and how those models are built. Historical accuracy of individual datasets used to build a product is just as important as modern-day historical output. We cannot understand where we are going if we do not understand where we came from.”
Former Fortune 100 emerging technology executive Sonita Lontoh told Fox News Digital that boards and business leaders need to understand that AI bias exists and has already exacerbated class-based and race-based inequities in healthcare and in creditworthiness assessments via mortgage approval algorithms.
A class-action lawsuit filed in December claims that the health insurance company Humana used the AI model nH Predict to deny medically necessary care to disabled and elderly patients covered under Medicare Advantage.
A month earlier, another lawsuit alleged that UnitedHealthcare had also used the nH Predict model to reject certain claims despite knowing that the tool was faulty and had contradicted physicians’ conclusions.
“Biases infiltrate AI because an algorithm is like an opinion. Biases can enter throughout the AI lifecycle — from the framing of the problem the AI is trying to solve, to product design and data collection, to development and testing. As such, risks and controls should occur at each stage of the AI lifecycle,” she told Fox News Digital.
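As one example of what such a control might look like at the testing stage, a team could run a disparate-impact check on a model's decisions. The sketch below uses hypothetical decision data and assumes the common "four-fifths" threshold as the review trigger; it is an illustration, not a complete fairness audit.

```python
# Illustrative control: a disparate-impact check comparing a model's
# positive-outcome rates across groups, with a 0.8 ("four-fifths") threshold.
from collections import defaultdict

def disparate_impact(decisions: list[tuple[str, bool]], reference_group: str) -> dict[str, float]:
    """Return each group's approval rate divided by the reference group's rate."""
    approvals, totals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return {g: rates[g] / rates[reference_group] for g in rates}

# Hypothetical loan decisions as (group, approved?) pairs
decisions = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 55 + [("B", False)] * 45
for group, ratio in disparate_impact(decisions, reference_group="A").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```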
Lontoh, a board member of several NYSE and Nasdaq-listed companies, said board members need a game plan to monitor and institute AI governance that includes collaboration with internal and external experts.
A publication from the National Institute of Standards and Technology (NIST) examines bias in AI. In the U.S., the proposed Algorithmic Accountability Act would require companies to assess their algorithms for bias. Under the General Data Protection Regulation (GDPR), the E.U. introduced a right to be informed about automated decisions made by an algorithm. Singapore’s Model AI Governance Framework focuses on internal governance, human involvement in AI-augmented decision-making, operations management and stakeholder communication.
“There are many more disparate examples. But algorithms operate across borders; we need global leadership on this. By providing stakeholders and policymakers with a broader perspective and necessary tools, we can stop the bigot in the machine from perpetuating its prejudice,” Masood said.
However, he remains optimistic that humanity can make AI work to its benefit.
Predictions for Mortgage Rates in 2024: What to Expect
As we look ahead to 2024, many homeowners and prospective buyers are wondering what to expect when it comes to mortgage rates. The landscape of the housing market is constantly changing, so it’s important to stay informed about trends and predictions. In this blog post, we will discuss some factors that could impact mortgage rates in 2024 and what homeowners and buyers can expect.
One factor that could impact mortgage rates in 2024 is the overall state of the economy. If the economy is strong and growing, we may see higher mortgage rates as the Federal Reserve looks to combat inflation. On the other hand, if the economy is stagnant or in a recession, we may see lower mortgage rates as the Fed looks to stimulate growth. It’s important to keep an eye on economic indicators such as GDP growth, unemployment rates, and inflation to get a sense of where mortgage rates may be heading.
Another factor that could impact mortgage rates in 2024 is Federal Reserve policy. The Fed plays a key role in setting interest rates, and their decisions can have a ripple effect on mortgage rates. If the Fed decides to raise interest rates in response to inflation, we may see an increase in mortgage rates. Conversely, if the Fed decides to lower interest rates to stimulate growth, we may see a decrease in mortgage rates. Keeping up with the latest news and announcements from the Fed can give homeowners and buyers a sense of where mortgage rates may be heading.
In terms of specific cities and local mortgage companies, it’s important to note that mortgage rates can vary depending on location and lender. For example, in a city like New York City, where real estate prices are high, mortgage rates may be higher compared to a city like Indianapolis, where real estate prices are lower. Additionally, local mortgage companies may offer competitive rates and terms compared to national lenders. For example, in New York City, local lenders like Quontic Bank and CrossCountry Mortgage may offer specialized products and services tailored to the needs of local buyers.
It’s important for homeowners and buyers to shop around and compare rates from multiple lenders to ensure they are getting the best deal. Websites like Bankrate and LendingTree can be helpful resources for comparing rates and terms from multiple lenders. Homeowners and buyers should also consider working with a mortgage broker who can help them navigate the lending process and find the best mortgage product for their needs.
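As a small illustration of why comparing quotes matters, the standard fixed-rate amortization formula shows how even a quarter-point rate difference changes the monthly payment. The loan amount, term and rates below are hypothetical examples, not forecasts or current offers.

```python
# Illustration: the standard fixed-rate amortization formula applied to two
# hypothetical rate quotes. Loan amount, term and rates are assumptions.
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """M = P * r * (1 + r)**n / ((1 + r)**n - 1), with monthly rate r and n payments."""
    r = annual_rate / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

loan, term_years = 400_000, 30
for rate in (0.0675, 0.0650):  # two illustrative quotes
    pay = monthly_payment(loan, rate, term_years)
    total_interest = pay * term_years * 12 - loan
    print(f"{rate:.3%}: ${pay:,.2f}/month, ${total_interest:,.0f} total interest")
```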
In conclusion, predicting mortgage rates in 2024 is not an exact science, but there are several factors that could impact rates. By staying informed about economic indicators, Federal Reserve policy, and local market trends, homeowners and buyers can make informed decisions about their mortgage. Shopping around and comparing rates from multiple lenders is key to ensuring you are getting the best deal on your mortgage. Whether you’re looking to refinance your existing mortgage or buy a new home, it’s important to stay informed and be proactive in managing your mortgage.