The Dangers of AI Tech In Law Enforcement

As technology advances, AI is gradually permeating domains such as education, healthcare, scientific discovery, and even law enforcement. Notably, police departments in the United States have embraced AI applications, using facial recognition, predictive policing, and automated parking-ticket processing, which can even help individuals contest or resolve citations.

Integrating AI into law enforcement aims to improve crime-solving and prevention strategies, even though the technology is not infallible. The objective is to foster innovative police services that build stronger connections with citizens, earn trust, and strengthen community relationships. By leveraging AI, law enforcement can potentially streamline operations, allowing a more hands-off approach while enhancing crime detection and recognition capabilities. However, it is crucial to balance the benefits of efficiency against the need for human oversight to ensure accountability and protect individual rights.


It is crucial to acknowledge the potential detrimental effects of AI in law enforcement, as it is not foolproof despite its intended purpose of crime-solving and prevention. Instances of wrongful arrests and racial profiling have highlighted the dangers associated with AI implementation. For instance, a recent case reported in The Washington Post involved a 45-year-old man from Detroit who was falsely identified as a shoplifting suspect by the city’s facial recognition software, leading to a wrongful arrest and subsequent lawsuit against the police department.


This example underscores the urgent need for regulation and legal frameworks around AI technology. It is essential to take a closer look at the risks AI brings, particularly its potential to harm minority communities. It is worth considering temporarily removing the technology from active deployment and conducting rigorous testing to address racial profiling. By actively addressing and mitigating these concerns, AI could eventually be harnessed for beneficial applications, such as locating missing persons and supporting law enforcement in other ways. Until these concerns are adequately resolved, however, the technology is not ready for widespread public use.


AI and Lockdown Browser: The Unfair Impact on Students of Color and Black Students

As a student of color, I have personally witnessed how Lockdown Browser and other AI-based exam-proctoring tools cause injustice and inequality. These tools have become prevalent in educational institutions to counter cheating on online exams, but they disproportionately harm Black students and students of color. A core flaw of Lockdown Browser is its use of facial recognition to monitor students during exams; this technology has repeatedly proven less accurate for Black people and people of color. As a result, students may be penalized unfairly by false positives or false negatives, leading to accusations of cheating or technical difficulties that prevent them from completing their exams.

In addition, the implementation of Lockdown Browser may worsen existing inequality in our education system, particularly in access to technology. Students from economically disadvantaged or rural areas may lack the resources, such as high-speed internet or compatible devices, to use the software effectively, putting them at an unfair disadvantage compared to their more affluent or privileged classmates. As someone deeply committed to promoting fairness and equality in education, I strongly believe it is imperative to oppose the implementation of Lockdown Browser and comparable AI-driven exam-monitoring systems. We must identify alternative methods of preventing academic dishonesty that do not hinge on prejudiced and unjust technology.

To gain a deeper understanding of student perspectives and experiences with Lockdown Browser, we conducted a survey gathering feedback on the use of this software in remote exams. The findings indicated that a significant proportion of students are uneasy with facial recognition being used for exam proctoring, and that many have encountered technical issues that adversely affected their exam results.

To sum up, Lockdown Browser and other AI-based exam-proctoring tools can have significant consequences for fairness and equality in education. We must collaborate as a society to find alternative measures that do not depend on prejudiced technology and that put the welfare and success of every student first.

Overcoming the Technocratic Paradigm: AI Bias in Healthcare

By Bridgitte Chan.

We currently live in a world driven by the technocratic paradigm. This worldview highly values technology for problem-solving and decision-making; everything around us is seen as a problem waiting to be solved through scientific knowledge and technological power. Although the technocratic paradigm has proven useful in the past, it contains many pitfalls. Most importantly, it perpetuates the false dichotomy between STEM and the humanities.

The technocratic paradigm ultimately devalues the humanities and disregards the mutually reinforcing relationship between the two fields. It promotes the notion that science and math are inherently more legitimate because they deal in objective truths, while the humanities are considered more subjective and open to interpretation. This is wrong for two main reasons. First, humanities fields also rely on empirical evidence and demand rigorous analysis and research; the fact that the data they interpret is typically more culturally based does not make it any less factual than math or science. Second, this view ignores the broader social and ethical contexts that shape STEM, which intrinsically dehumanizes decision-making and reduces people to mere data points. The technocratic paradigm treats the human body as a machine and uses the white, male form as its standard, which poses some very obvious problems. As we consider the ways the humanities have historically been ignored in science, the harmful implications of doing so become resoundingly clear.

I am passionate about integrating the humanities and arts into STEM. In other words, I advocate for converting STEM into STEAHM, because doing so forces us to question the narratives and ethics behind the things we have been taught to accept as undeniable facts. This shift has become even more vital with the emergence of AI. Artificial intelligence has made significant advancements in recent years, and as AI continues to grow at a rapid rate, so does its ability to affect the lives of every human on this planet. AI replicates the world as it exists and makes decisions based solely on mathematics, unaware of ethical and moral nuance. AI has already exposed society’s biases in how it identifies human faces: facial recognition algorithms not only work better on male faces than female ones, but sometimes fail to detect darker skin tones altogether. There are instances that go beyond this. For example, when Amazon tried to use AI to speed up its hiring process, the system learned to reject women because technological jobs and positions of power have not traditionally been held by women. If this is how AI performs when hiring people, what could it mean for the future of AI within our medical system?
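To make the kind of disparity described above concrete, here is a minimal sketch of how auditors quantify it: comparing error rates across demographic groups. All outcomes below are hypothetical, invented purely for illustration; they are not measurements from any real face-detection system.

```python
def false_negative_rate(y_true, y_pred):
    """Fraction of actual positives (faces present) the model missed."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    misses = sum(1 for t, p in positives if p == 0)
    return misses / len(positives)

# Hypothetical detection outcomes for two demographic groups:
# 1 = face present / detected, 0 = face not detected.
group_a_true = [1] * 10
group_a_pred = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]  # 1 miss in 10
group_b_true = [1] * 10
group_b_pred = [1, 1, 0, 1, 0, 1, 0, 1, 1, 0]  # 4 misses in 10

fnr_a = false_negative_rate(group_a_true, group_a_pred)  # 0.1
fnr_b = false_negative_rate(group_b_true, group_b_pred)  # 0.4
print(f"FNR gap between groups: {fnr_b - fnr_a:.1f}")
```

A gap like this, computed per group rather than as a single overall accuracy number, is exactly what headline accuracy figures can hide, which is why disaggregated evaluation matters.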

AI is a direct reflection of our world because it is based on the data it is fed. It would be naive to believe that our current medical data and knowledge are devoid of misconceptions; much of that knowledge is derived from problematic Western worldviews. Whether consciously or not, people inevitably embed their own biases into technology, largely because the data being used is skewed. Since our history is filled with inequalities, the data currently used to train AI frequently excludes, and can even work against, BIPOC and other marginalized groups. When we look at the ways the American healthcare system has violently mistreated Black people, especially Black women, it is only logical to assume AI would worsen these disparities. If we do not take swift action to fix this, further disparities will be created and a new form of injustice, known as algorithmic injustice, will run rampant.

It is crucial for us to illuminate the biases that exist within these algorithms so we can actively combat them and ensure that future applications of AI are accurate, equitable, and ethical, especially in the realm of healthcare and medicine. AI has the potential to bring about a multitude of advantages and enhance the way we live, but there is a lot of work to be done first. It is time for developers to cultivate an improved version of AI that is inclusive and created with diverse populations in mind. This can only be done if we unite the humanities and STEM. We must be willing to further our cultural competencies and embrace the ideas more commonly taught in the humanities. We must be cognizant of who is developing the technology and whom, or what, they are developing AI for. Does it serve the public interest? Or is it being built for profit? Is it ethical? Are the people in charge aware of the social responsibilities that come with creating artificial intelligence? Asking these questions is the first step in achieving algorithmic justice, and it is our duty to challenge and overcome the discriminatory practices brought about by the technocratic paradigm; only then can we truly reap the benefits of AI.

