Lighthouse Silicon Valley Q1 Highlights: Their Promising Start to 2023

During the Spring 2023 semester, Lighthouse Silicon Valley was a client of the Sbona Honors Program in Marketing and Business Analytics. The project was also conducted in partnership with the Commonwealth Fund, the Harkness Fellowship, and Stanford University for a national launch event based on a study centered on artificial intelligence and racial bias. The Sbona team consisted of both marketing and business analytics students, who surveyed the student body about artificial intelligence and lockdown browsers, in addition to researching ethics within artificial intelligence.

Lighthouse Silicon Valley was founded in San Jose, CA in 2021. Lighthouse aims to address inequalities faced by marginalized communities by promoting engagement, fostering ecosystem development, creating safety-net services, and supporting workforce development. The organization has prioritized Justice, Equity, Diversity, and Inclusion (JEDI) opportunities that aim to uplift vulnerable populations by providing them with family-sustaining wages.

In the Lighthouse Quarter 1 Executive Report, Quency Phillips, executive director of Lighthouse Silicon Valley, reflects on the organization's accomplishments. In the past 21 months, Lighthouse has doubled its network size and increased staff capacity while becoming a 501(c)(3) organization. It has also received recognition from the National Science Foundation, secured major donors, and become a member of the City of San Jose COVID Task Force as well as the Santa Clara County Climate Collaborative Leadership Advisory Team.

Lighthouse understands that although much has been accomplished, there is more it can do for the community. The team is actively working on workforce development, sustainability, and advocacy for policy changes with a plan that works in two phases. Phase one involved continuing to find partnerships, while phase two focuses on using their 501(c)(3) status to support their network of 150 stakeholders and establish stability. Lighthouse plans to meet with over 50 organizations on May 31st to discuss curriculum and workforce development within K-12 education, adult education, and immigrant, reentry, and refugee communities. The plan aims to collaborate with trades, labor, and community college partners to ensure inclusivity. In conclusion, Lighthouse Silicon Valley's first-quarter performance has been remarkable, and I am confident that 2023 will hold great success for the organization.

Artificial Intelligence Within the Hiring Process

Artificial intelligence (AI) is an important tool that simulates human intelligence within technology. It has grown rapidly in the past two years and is now used in the hiring process at many large companies for resume screening. Through resume screening, artificial intelligence finds potential candidates who match the qualifications listed in the job description. It scans thousands of resumes and flags those that meet the requirements by matching the main keywords from the job description. Resume screening with artificial intelligence saves hiring managers time and lets them put their energy into other projects, but it comes with downsides. Since AI simulates human intelligence, it is only as good as the information it was fed, so its decision-making can replicate human bias (gender, race, and/or age), over-rely on the required keywords, and overlook skills it is unable to identify. With these downsides, it could pass over an incredible candidate. Candidates may feel they are being evaluated solely on their ability to meet certain technical criteria rather than on their potential to contribute to the company culture and overall success.
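The keyword-matching style of screening described above can be sketched in a few lines. This is a hypothetical illustration; the function, keywords, and threshold here are invented for the example and are not taken from any real vendor's system. It also makes the over-reliance problem concrete: a resume that describes the same skill in different words scores zero for that keyword.

```python
import re

# Hypothetical keyword screener: a minimal sketch of the kind of
# resume filtering described above, NOT any company's actual system.
def screen_resume(resume_text, required_keywords, threshold=0.6):
    """Flag a resume if it matches enough of the job description's keywords."""
    words = set(re.findall(r"[a-z+#]+", resume_text.lower()))
    hits = [kw for kw in required_keywords if kw.lower() in words]
    score = len(hits) / len(required_keywords)
    return score >= threshold, hits

job_keywords = ["python", "sql", "marketing", "analytics"]

# Matches 3 of 4 keywords, so it clears the threshold and is flagged.
flagged, matched = screen_resume(
    "Experienced in Python and SQL with a background in analytics.",
    job_keywords,
)

# The same filter misses equivalent skills phrased differently:
# "statistical analyses of campaign data" contains no listed keyword,
# so a relevant candidate is rejected.
flagged2, _ = screen_resume(
    "Built dashboards and ran statistical analyses of campaign data.",
    job_keywords,
)
```

The second call is the failure mode the paragraph describes: the filter only sees exact keywords, so relevant experience in different words is invisible to it, and any bias baked into the chosen keywords is applied at scale.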

Even though artificial intelligence is widely used in the hiring process, many companies, such as Workday, still quickly read through each resume because they know that artificial intelligence makes mistakes. On the other hand, "currently, there have been many businesses, Amazon to be specific, who have implemented trial runs integrating AI into the hiring process. What came of this is an extremely biased system that is completely reliant on patterns" (Dastin, 2018). The system used by Amazon exhibited a bias against women and showed patterns of favoring male candidates.

As a female college student, this causes me some concern: college students are beginning their careers and building their resumes, and AI screening could put them at a disadvantage. As stated above, AI programs filter out resumes based on formats and required keywords, including skills or experience, but for college students with little to no experience, this makes it much harder to get an interview, even for an entry-level position. With gender bias, female candidates can be pushed out of the system without being given a chance because the system has learned a bias against women and runs on patterns. One way for college students to improve their chances with AI programs is to tailor their resumes to specific job postings, using relevant keywords and highlighting their achievements. In conclusion, the use of artificial intelligence in the hiring process for resume screening has its benefits, but it also has its fair share of disadvantages. Employers should work toward addressing bias and ensuring that each candidate is given a fair and equal opportunity.

Santa Clara University’s Markkula Center for Applied Ethics

https://www.scu.edu/ethics-spotlight/generative-ai-ethics/the-ethics-of-ai-applications-for-mental-health-care/

Santa Clara University's Markkula Center for Applied Ethics focuses on preparing individuals, particularly students, to make ethical decisions and develop ethical decision-making skills. It offers various fellowships and internships to help individuals build these skills. The center has evaluated its values, vision, and strategic priorities in the past year, and has welcomed new members to the team, including Dorothee Caminiti, who focuses on the ethical issues related to personalized medicine, and Sarah Cabral, who leads the business ethics internship. The center aims to build a more ethical future with the support of donors and partnerships.

An article published on their website discusses the concept of ethics and its importance in various aspects of life. Ethics involves standards and practices that guide our behavior, including in personal, professional, and societal contexts. The article also clarifies what ethics is not, including feelings, religion, following the law, following culturally accepted norms, or science, as these do not necessarily dictate what is ethical. Instead, ethics requires knowledge, skills, and habits to make informed decisions that align with high ethical standards.

Additionally, there are subtopics within the discussion of ethics in AI. Thomas Plante wrote an article about the implications of using artificial intelligence (AI) for mental health treatment. While there are many AI-based mental health applications available today, research is needed to determine their effectiveness, and ethical issues need to be addressed. First, engineers and computer scientists should work alongside licensed mental health professionals to ensure that their products and services are safe and effective. Second, mental health applications need to maintain strict confidentiality to protect user privacy. Third, while preliminary research suggests that AI-based mental health applications may be helpful for mild to moderate symptoms, they may not be appropriate for more severe symptoms or psychopathology. Despite these potential issues, the author notes that AI-based mental health applications may be a boon to treating more people in a more affordable and convenient way, particularly given the current mental illness epidemic. However, research is needed to ensure that these applications are based on solid empirical evidence and best clinical practices.




Overcoming the Technocratic Paradigm: AI Bias in Healthcare

By Bridgitte Chan.

We currently live in a world that is driven by the technocratic paradigm. This worldview highly values the use of technology for problem-solving and decision-making; everything around us is seen as a problem waiting to be solved through scientific knowledge and technological power. Although the technocratic paradigm has proven itself to be useful in the past, it contains many pitfalls. Most importantly, it perpetuates the false dichotomy between STEM and humanities. 

The technocratic paradigm ultimately devalues the humanities and disregards the mutually reinforcing relationship between the two fields. It promotes the notion that science and math are inherently more legitimate because they are objective truths, compared to the humanities, which are considered more subjective and open to interpretation. This is wrong for two main reasons. First and foremost, humanities fields also rely on empirical evidence and call for rigorous analysis and research; just because the data interpreted is typically more culturally based does not make it any less factual than math and science. Secondly, it ignores the broader social and ethical contexts that impact the field of STEM, which intrinsically dehumanizes decision-making and reduces people to mere data points. The technocratic paradigm treats the human body as a machine and uses the white, male form as its standard; this poses some very obvious problems. As we begin to think about the ways the humanities have historically been ignored in the field of science, the harmful implications of doing so become resoundingly clear.

I am passionate about implementing the humanities and arts into the field of STEM. In other words, I am an advocate for the conversion of STEM into STEAHM because it forces us to question the narratives and ethics behind the things we have been taught to accept as undeniable facts. This shift has become even more vital with the emergence of AI. Artificial intelligence has made significant advancements in recent years, and as AI continues to grow at a rapid rate, so does its ability to impact the lives of every human on this planet. AI replicates the world as it exists and makes decisions based solely on mathematics, unaware of any ethical and moral nuances. It is evident that AI has exposed society's current biases in how it performs when identifying human faces. Not only do facial recognition algorithms work better on male faces than on female faces, but they sometimes fail to detect darker skin tones altogether, and there are instances that go beyond this. For example, Amazon tried to use AI to speed up its hiring process, and the system automatically rejected women because technological jobs and positions of power have not traditionally been held by women. If this is how AI performs when trying to hire people, what could this mean for the future of AI within our medical system?

AI is a direct reflection of our world because it is based on the data it is fed. It would be naive to believe the current medical data and knowledge we have is devoid of misconceptions, as much of it is derived from problematic Western worldviews. Whether it is a conscious decision or not, people inevitably embed their own biases into technology, mainly because the data being used is largely skewed. Since our history is filled with inequalities, the data currently being used to train AI frequently excludes, and can even work against, BIPOC as well as other marginalized groups. When we look at the ways the American healthcare system has violently mistreated Black people, especially Black women, it is only logical to assume AI would worsen these disparities. If we do not take swift action to fix this, further disparities will be created and a new form of injustice, known as algorithmic injustice, will run rampant.

It is crucial for us to illuminate the biases that exist within these algorithms so we can actively combat them and ensure that the future applications of AI are accurate, equitable, and ethical, especially in the realm of healthcare and medicine. AI has the potential to bring about a multitude of advantages and enhance the way we live, but there is a lot of work that needs to be done beforehand. It is time for developers to cultivate an improved version of AI that is inclusive and created with diverse populations in mind. This can only be done if we unite the fields of humanities and STEM. We must be willing to further our cultural competencies and embrace the ideologies more exclusively taught in the humanities. We must be cognizant of who is developing the technology and whom, or what, they are developing AI for. Does it serve the public interest? Or is it being built for profit? Is it ethical? Are the people in charge aware of the social responsibilities that come with creating artificial intelligence? Asking these questions is the first step in achieving algorithmic justice, and it is our duty to challenge and overcome the discriminatory practices brought about by the technocratic paradigm; only then can we truly reap the benefits of AI.