The Dangers of AI Tech in Law Enforcement

As technology continues to shape our society, AI has gradually permeated domains such as education, healthcare, scientific discovery, and even law enforcement. Notably, police departments in the United States have embraced AI applications, using facial recognition, predictive policing, and even automated parking-ticket processing, which could also help individuals contest or resolve such citations.

Integrating AI into law enforcement aims to improve crime-solving and prevention strategies, even though the technology is not infallible. The objective is to foster innovative police services, establish stronger connections with citizens, build trust, and strengthen community relationships. By leveraging AI technologies, law enforcement can potentially streamline operations, allowing a more hands-off approach while enhancing crime detection and recognition capabilities. However, it is crucial to strike a balance between the benefits of efficiency and the need for human oversight to ensure accountability and protect individual rights.

It is crucial to acknowledge the potential detrimental effects of AI in law enforcement, because the technology is not foolproof despite its intended purpose of crime-solving and prevention. Wrongful arrests and racial profiling have already highlighted the dangers of AI implementation. In one case reported in The Washington Post, a 45-year-old man from Detroit was falsely identified as a shoplifting suspect by the city’s facial recognition software, leading to a wrongful arrest and a subsequent lawsuit against the police department.

This example underscores the urgent need for regulation and legal frameworks surrounding AI technology. A closer look at the risks AI brings, particularly its potential harm to minority communities, is essential. It is worth considering temporarily removing the technology from active deployment and conducting rigorous testing to address issues of racial profiling. If these concerns are actively addressed and mitigated, AI could be harnessed for beneficial applications such as locating missing persons and supporting law enforcement in other ways. Until they are adequately resolved, however, the technology is not ready for widespread public use.

Lighthouse Silicon Valley Q1 Highlights: A Promising Start to 2023

During the Spring 2023 semester, Lighthouse Silicon Valley was a client of the Sbona Honors Program in Marketing and Business Analytics. The project was also in partnership with the Commonwealth Fund, the Harkness Fellowship, and Stanford University for a national launch event based on a study centered on artificial intelligence and racial biases. The Sbona team consisted of both marketing and business analytics students, who surveyed the student body about artificial intelligence and Lockdown Browser in addition to researching ethics within artificial intelligence.

Lighthouse Silicon Valley was founded in San Jose, CA in 2021. Lighthouse aims to address inequalities faced by marginalized communities by promoting engagement, fostering ecosystem development, creating safety-net services, and supporting workforce development. They have prioritized Justice, Equity, Diversity, and Inclusion (JEDI) opportunities that aim to uplift vulnerable populations by providing them with family-sustaining wages.

In the Lighthouse Quarter 1 Executive Report, Quency Phillips, executive director of Lighthouse Silicon Valley, reflects on the organization’s accomplishments. In the past 21 months, Lighthouse has doubled its network size and increased staff capacity while becoming a 501(c)(3) organization. They have also received recognition from the National Science Foundation, secured major donors, and joined the City of San Jose COVID Task Force as well as the Santa Clara County Climate Collaborative Leadership Advisory Team.

Lighthouse understands that although much has been accomplished, there is more they can do for the community. Their team is actively working on workforce development, sustainability, and policy advocacy through a two-phase plan. Phase one involves continuing to find partnerships, while phase two focuses on using their 501(c)(3) status to support their network of 150 stakeholders and establish stability. Lighthouse plans to meet with over 50 organizations on May 31st to discuss curriculum and workforce development within K-12 education, adult education, and immigrant, reentry, and refugee communities. The plan aims to collaborate with trades, labor, and community college partners to ensure inclusivity. In conclusion, Lighthouse Silicon Valley’s first-quarter performance has been remarkable, and I am confident that 2023 will hold great success for the organization.

Artificial Intelligence Within the Hiring Process

Artificial intelligence (AI) is an important tool that simulates human intelligence within technology. It has grown rapidly in the past two years and is now being used in the hiring process at many big companies for resume screening. Through resume screening, AI finds potential candidates who match the qualifications in the job description: it scans thousands of resumes and flags those that contain the main keywords from the posting. Resume screening with AI saves hiring managers time they can put toward other projects, but it comes with downsides. Because AI only simulates human intelligence, it is only as good as the data it was fed, so its decisions can replicate human bias (gender, race, and/or age), over-rely on required keywords, and overlook skills it is unable to identify. With these downfalls, it could pass over an incredible candidate. Candidates may also feel they are being evaluated solely on their ability to meet certain technical criteria, rather than on their potential to contribute to the company culture and overall success.
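
To make that mechanism concrete, below is a minimal sketch of what keyword-based resume screening might look like; the keywords, threshold, and sample resumes are hypothetical, and real applicant-tracking systems are far more sophisticated, but the core idea of flagging resumes by keyword matches is the same.

```python
# Minimal sketch of keyword-based resume screening (hypothetical data).
JOB_KEYWORDS = {"python", "sql", "tableau", "marketing analytics"}  # assumed
FLAG_THRESHOLD = 3  # assumed: minimum keyword matches to be flagged

def score_resume(resume_text: str) -> int:
    """Count how many job-description keywords appear in the resume."""
    text = resume_text.lower()
    return sum(1 for keyword in JOB_KEYWORDS if keyword in text)

def screen(resumes: dict) -> list:
    """Return the candidates whose resumes meet the keyword threshold."""
    return [name for name, text in resumes.items()
            if score_resume(text) >= FLAG_THRESHOLD]

resumes = {
    "Candidate A": "Built Tableau dashboards, wrote SQL queries, Python scripts.",
    "Candidate B": "Led a student marketing team; strong analytical instincts.",
}
print(screen(resumes))  # ['Candidate A'] -- B may be capable but lacks keywords
```

Note how Candidate B is filtered out entirely: the screener cannot recognize skills that are not phrased as the expected keywords, which is exactly the over-reliance described above.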

Even though artificial intelligence is widely used in the hiring process, many companies, Workday for example, still quickly read through each resume because they know that artificial intelligence makes mistakes. On the other hand, “currently, there have been many businesses, Amazon to be specific, who have implemented trial runs integrating AI into the hiring process. What came of this is an extremely biased system that is completely reliant on patterns” (Dastin, 2018). The system used by Amazon exhibited bias against women and showed a pattern of favoring male candidates.
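
As a toy illustration of how such pattern reliance reproduces bias (the data below is fabricated for demonstration and is not Amazon’s actual system), consider a naive model that scores resume words by how often they appeared among past hires versus past rejections. If the historical decisions skewed male, a word like “women’s” ends up with a negative weight even though it says nothing about qualifications:

```python
# Toy illustration of bias replication (fabricated data, not a real system).
from collections import Counter

history = [  # (resume words, hired?) -- hypothetical past decisions
    ("captain chess club engineer", True),
    ("engineer baseball team lead", True),
    ("women's chess club engineer", False),
    ("women's soccer captain engineer", False),
]

hired_words, rejected_words = Counter(), Counter()
for words, was_hired in history:
    (hired_words if was_hired else rejected_words).update(words.split())

def word_weight(word: str) -> int:
    # Naive weight: occurrences among hires minus occurrences among rejections.
    return hired_words[word] - rejected_words[word]

def score(resume: str) -> int:
    return sum(word_weight(word) for word in resume.split())

# Two otherwise identical resumes: the second scores lower purely because
# it contains the word "women's".
print(score("chess club captain engineer"))           # prints 0
print(score("women's chess club captain engineer"))   # prints -2
```

The model never sees gender directly; it simply learns that a word correlated with women appeared more often among rejections, which mirrors the pattern Dastin reported.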

As a female college student, I find this concerning: college students are beginning their careers and building their resumes, and AI screening could put them at a disadvantage. As stated above, AI programs filter out resumes that lack certain formats and required keywords, including skills or experience, so for college students with little to no experience it will be much harder to get an interview, even for an entry-level position. With gender bias, female college candidates could be pushed out of the system without being given a chance because the system has developed a bias against women and runs on patterns. One way for college students to improve their chances with AI programs is to tailor their resumes to specific job postings, use relevant keywords, and highlight their achievements. In conclusion, the use of artificial intelligence for resume screening in the hiring process has its benefits, but it also has its fair share of disadvantages. Employers should work toward addressing bias and ensuring that each candidate is given a fair and equal opportunity.

AI and Lockdown Browser: The Unfair Impact on Students of Color and Black Students

As a student of color, I have personally witnessed how Lockdown Browser and other AI-based exam-proctoring tools have caused injustice and inequality. These tools have become prevalent in educational institutions as a way to counter cheating in online exams, but they have produced unintended outcomes that affect Black students and students of color more than others. Lockdown Browser has a flaw: it uses facial recognition technology to monitor students during exams, and this technology has proven less accurate for Black people and other people of color. Consequently, students may be penalized unfairly by false positives or negatives, leading to allegations of cheating or technical difficulties that prevent them from completing their exams.

In addition, the implementation of Lockdown Browser may worsen existing inequality in our education system, particularly in terms of access to technology. Students from economically disadvantaged or rural areas may not have the necessary resources, such as high-speed internet or compatible devices, to use the software effectively. This creates an unfair disadvantage for these students compared to their more affluent or privileged classmates. As someone deeply committed to promoting fairness and equality in education, I strongly believe it is imperative to voice opposition to the implementation of Lockdown Browser and other comparable AI-driven exam-monitoring systems. It is essential that we identify alternative methods of preventing academic dishonesty that do not hinge upon prejudiced and unjust technology.

To gain a deeper understanding of student perspectives and experiences with Lockdown Browser, we undertook a survey to gather feedback on the use of this software in remote exams. The findings indicated that a significant proportion of students are uneasy with the use of facial recognition technology for exam proctoring, and that a number of them have encountered technical issues that adversely affected their exam results. To sum up, the use of Lockdown Browser and other AI-based exam-proctoring tools can have significant consequences for fairness and equality in education. It is imperative that we collaborate as a society to find alternative measures that do not depend on prejudiced technology and that put the welfare and achievement of every student first.

Santa Clara University’s Markkula Center for Applied Ethics

https://www.scu.edu/ethics-spotlight/generative-ai-ethics/the-ethics-of-ai-applications-for-mental-health-care/

Santa Clara University’s Markkula Center for Applied Ethics focuses on preparing individuals, particularly students, to make ethical decisions and develop ethical decision-making skills, offering various fellowships and internships to help build those skills. In the past year, the center has evaluated its values, vision, and strategic priorities and welcomed new members to the team, including Dorothee Caminiti, who focuses on the ethical issues related to personalized medicine, and Sarah Cabral, who leads the business ethics internship. The center aims to build a more ethical future with the support of donors and partnerships.

An article published on the center’s website discusses the concept of ethics and its importance in various aspects of life. Ethics involves standards and practices that guide our behavior in personal, professional, and societal contexts. The article also clarifies what ethics is not: feelings, religion, following the law, following culturally accepted norms, or science, since none of these necessarily dictates what is ethical. Instead, ethics requires knowledge, skills, and habits to make informed decisions that align with high ethical standards.

There are also subtopics within the discussion of ethics in AI. Thomas Plante, for example, wrote an article about the implications of using artificial intelligence for mental health treatment. While many AI-based mental health applications are available today, research is needed to determine their effectiveness, and several ethical issues need to be addressed. First, engineers and computer scientists should work alongside licensed mental health professionals to ensure that their products and services are safe and effective. Second, mental health applications need to maintain strict confidentiality to protect user privacy. Third, while preliminary research suggests that AI-based mental health applications may help with mild to moderate symptoms, they may not be appropriate for more severe symptoms or psychopathology. Despite these potential issues, the author notes that AI-based mental health applications may be a boon for treating more people in a more affordable and convenient way, particularly given the current mental illness epidemic. However, research is needed to ensure that these applications are based on solid empirical evidence and best clinical practices.
