A recent study by researchers at Virginia Tech sheds light on potential biases in the artificial intelligence (AI) chatbot ChatGPT, specifically in how it provides information on environmental justice issues. The findings suggest that ChatGPT is limited in delivering area-specific information, with notable variation in its output across counties in the United States.

According to the report, the researchers found that ChatGPT struggles to provide locally tailored information on environmental justice issues: such information is more readily available for larger, densely populated states. By contrast, in smaller rural states such as Idaho and New Hampshire, more than 90% of the population lives in counties for which the chatbot could not provide location-specific information.
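The article does not describe the researchers' exact querying protocol, but a minimal sketch of the kind of county-level probing implied by the findings could look like the following, assuming the official OpenAI Python client; the model name, prompt wording, and sample counties are illustrative assumptions, not details taken from the study.

```python
# Illustrative sketch only: the study's actual prompts, model, and county sample are
# not specified in the article. Assumes the openai Python package is installed and an
# API key is set via the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical sample spanning a densely populated state and sparsely populated rural states.
counties = [
    "Los Angeles County, California",
    "Custer County, Idaho",
    "Coos County, New Hampshire",
]

for county in counties:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the article does not name one
        messages=[{
            "role": "user",
            "content": f"What environmental justice issues affect {county}?",
        }],
    )
    # Inspect whether the answer is county-specific or falls back to generic statements.
    print(county, "->", response.choices[0].message.content[:200])
```

Comparing responses across counties in this way would surface the kind of geographic disparity the study reports, where some counties receive locally grounded answers and others receive only generic ones.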

The report raises concerns about potential geographic biases in the ChatGPT model and calls for further research to address these limitations. Kim, a lecturer in Virginia Tech's Department of Geography, stressed the need for continued study as such biases are uncovered.

"While more study is needed, our findings reveal that geographic biases currently exist in the ChatGPT model," stated Kim.

The research paper includes a map showing the share of the U.S. population without access to location-specific information on environmental justice issues, giving a visual sense of the disparities observed.

This development follows recent revelations about potential political biases in ChatGPT. A study by scholars from the United Kingdom and Brazil found errors and biases in the output of large language models such as ChatGPT, raising concerns that they could mislead readers.

As the AI community grapples with the ethical implications of biased algorithms, the study of ChatGPT's geographic constraints adds another layer to the ongoing discourse on responsible AI development and the need for continuous evaluation and correction of bias in AI systems.

(CIARAN LYONS, COINTELEGRAPH, 2023)