Discriminating Data


Book Description

How big data and machine learning encode discrimination and create agitated clusters of comforting rage. In Discriminating Data, Wendy Hui Kyong Chun reveals how polarization is a goal—not an error—within big data and machine learning. These methods, she argues, encode segregation, eugenics, and identity politics through their default assumptions and conditions. Correlation, which grounds big data’s predictive potential, stems from twentieth-century eugenic attempts to “breed” a better future. Recommender systems foster angry clusters of sameness through homophily. Users are “trained” to become authentically predictable via a politics and technology of recognition. Machine learning and data analytics thus seek to disrupt the future by making disruption impossible. Chun, who has a background in systems design engineering as well as media studies and cultural theory, explains that although machine learning algorithms may not officially include race as a category, they embed whiteness as a default. Facial recognition technology, for example, relies on the faces of Hollywood celebrities and university undergraduates—groups not famous for their diversity. Homophily emerged as a concept to describe white U.S. resident attitudes to living in biracial yet segregated public housing. Predictive policing technology deploys models trained on studies of predominantly underserved neighborhoods. Trained on selected and often discriminatory or dirty data, these algorithms are only validated if they mirror this data. How can we release ourselves from the vice-like grip of discriminatory data? Chun calls for alternative algorithms, defaults, and interdisciplinary coalitions in order to desegregate networks and foster a more democratic big data.
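
A minimal sketch can make the homophily mechanism described above concrete. The toy Python simulation below is purely illustrative (it is not Chun's method; the user count, opinion scores, and neighbour count are invented): each user is linked to the people most similar to them, and the resulting ties turn out far more alike than chance.

```python
# Illustrative sketch only: a toy homophily-based recommender.
# All sizes and scores are hypothetical; this is not the method
# analyzed in the book, just the general mechanism it critiques.
import random

random.seed(0)

# Each user holds a single opinion score in [0, 1]; 100 users is an
# arbitrary, made-up population size.
users = [random.random() for _ in range(100)]

def recommend(i, k=5):
    """Recommend the k users most similar in opinion to user i (pure homophily)."""
    others = [j for j in range(len(users)) if j != i]
    return sorted(others, key=lambda j: abs(users[i] - users[j]))[:k]

# Build a follow graph purely from the recommendations.
edges = [(i, j) for i in range(len(users)) for j in recommend(i)]

# Compare: how alike are recommended pairs versus random pairs?
tie_gap = sum(abs(users[i] - users[j]) for i, j in edges) / len(edges)
rand_gap = sum(abs(random.choice(users) - random.choice(users)) for _ in range(1000)) / 1000

print(f"mean opinion gap across recommended ties: {tie_gap:.3f}")
print(f"mean opinion gap across random pairs:     {rand_gap:.3f}")
# Recommended ties are much more alike than random ones: similarity-
# driven linking pulls the network into clusters of sameness.
```

Because every tie is chosen for similarity, the network segregates by opinion even though no group label is ever used, which is the structural dynamic the book attributes to recommender systems.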




Algorithms of Oppression


Book Description

Acknowledgments
Introduction: the power of algorithms
A society, searching
Searching for Black girls
Searching for people and communities
Searching for protections from search engines
The future of knowledge in the public
The future of information culture
Conclusion: algorithms of oppression
Epilogue
Notes
Bibliography
Index
About the author




Measuring Racial Discrimination


Book Description

Many racial and ethnic groups in the United States, including blacks, Hispanics, Asians, American Indians, and others, have historically faced severe discrimination: pervasive and open denial of civil, social, political, educational, and economic opportunities. Today, large differences among racial and ethnic groups continue to exist in employment, income and wealth, housing, education, criminal justice, health, and other areas. While many factors may contribute to such differences, their size and extent suggest that various forms of discriminatory treatment persist in U.S. society and serve to undercut the achievement of equal opportunity. Measuring Racial Discrimination considers the definition of race and racial discrimination, reviews the existing techniques used to measure racial discrimination, and identifies new tools and areas for future research. The book conducts a thorough evaluation of current methodologies for a wide range of circumstances in which racial discrimination may occur, and makes recommendations on how to better assess the presence and effects of discrimination.




Summary of Wendy Hui Kyong Chun's Discriminating Data


Book Description

Get the summary of Wendy Hui Kyong Chun’s Discriminating Data.

#1 The Cambridge Analytica scandal showed how social media can be abused to manipulate elections.

#2 Psychographics superseded demographics, geographics, and economics in terms of impact. It was determined that people’s personalities could be changed with rational, yet fear-based messages.

#3 The claims made by Cambridge Analytica, and by many other companies that use psychographic targeting, need to be taken with several grains of salt. Their efficacy has not yet been proven.




Pattern Discrimination


Book Description

How do “human” prejudices reemerge in algorithmic cultures allegedly devised to be blind to them? To answer this question, this book investigates a fundamental axiom in computer science: pattern discrimination. By imposing identity on input data, in order to filter—that is, to discriminate—signals from noise, patterns become a highly political issue. Algorithmic identity politics reinstate old forms of social segregation, such as class, race, and gender, through defaults and paradigmatic assumptions about the homophilic nature of connection. Instead of providing a more “objective” basis of decision making, machine-learning algorithms deepen bias and further inscribe inequality into media. Yet pattern discrimination is an essential part of human—and nonhuman—cognition. Bringing together media thinkers and artists from the United States and Germany, this volume asks the urgent questions: How can we discriminate without being discriminatory? How can we filter information out of data without reinserting racist, sexist, and classist beliefs? How can we queer homophilic tendencies within digital cultures?
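
As a purely illustrative aside, the computer-science sense of pattern discrimination (imposing identity on inputs so as to separate signal from noise) can be sketched in a few lines of Python. The data, labels, and nearest-centroid rule below are invented for illustration; the point is only that whatever regularities the chosen training sample contains become the filter applied to every later input.

```python
# Minimal, self-contained sketch of "pattern discrimination" in the
# computer-science sense: imposing a boundary that sorts inputs into
# classes. The data below are invented for illustration only.

def train_centroids(samples):
    """Compute one centroid (mean value) per labelled class."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(value, centroids):
    """Assign the class whose centroid is nearest (the learned 'pattern')."""
    return min(centroids, key=lambda label: abs(value - centroids[label]))

# Training data: (measurement, label). Whatever regularities this
# particular sample contains become the filter for everything after it.
training = [(0.1, "noise"), (0.2, "noise"), (0.8, "signal"), (0.9, "signal")]
centroids = train_centroids(training)

for x in [0.15, 0.4, 0.55, 0.95]:
    print(x, "->", classify(x, centroids))
# Every new input is forced into one of the learned categories; the
# politics of the filter lies in who chose the sample and the labels.
```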




Discriminating Risk


Book Description

The U.S. home mortgage industry first formalized risk criteria in the 1920s and 1930s to determine which applicants should receive funds. Over the past eighty years, these formulae have become more sophisticated. Guy Stuart demonstrates that the very concepts on which lenders base their decisions reflect a set of social and political values about "who deserves what." Stuart examines the fine line between licit choice and illicit discrimination, arguing that lenders, while eradicating blatantly discriminatory practices, have ignored the racial and economic-class biases that remain encoded in their decision processes. He explains why African Americans and Latinos continue to be at a disadvantage in gaining access to loans: discrimination, he finds, results from the interaction between the way lenders make decisions and the way they shape the social structure of the mortgage and housing markets. Mortgage lenders, Stuart contends, are embedded in and shape a social context that can best be understood in terms of rules, networks, and the production of space. Stuart's history of lenders' risk criteria reveals that they were synthesized from rules of thumb, cultural norms, and untested theories. In addition, his interviews with real estate and lending professionals in the Chicago housing market show us how the criteria are implemented today. Drawing on census and Home Mortgage Disclosure Act data for quantitative support, Stuart concludes with concrete policy proposals that take into account the social structure in which lenders make decisions.




Programmed Inequality


Book Description

This “sobering tale of the real consequences of gender bias” explores how Britain lost its early dominance in computing by systematically discriminating against its most qualified workers: women (Harvard Magazine). In 1944, Britain led the world in electronic computing. By 1974, the British computer industry was all but extinct. What happened in the intervening thirty years holds lessons for all postindustrial superpowers. As Britain struggled to use technology to retain its global power, the nation’s inability to manage its technical labor force hobbled its transition into the information age. In Programmed Inequality, Mar Hicks explores the story of labor feminization and gendered technocracy that undercut British efforts to computerize. That failure sprang from the government’s systematic neglect of its largest trained technical workforce simply because they were women. Women were a hidden engine of growth in high technology from World War II to the 1960s. As computing experienced a gender flip, becoming male-identified in the 1960s and 1970s, labor problems grew into structural ones and gender discrimination caused the nation’s largest computer user—the civil service and sprawling public sector—to make decisions that were disastrous for the British computer industry and the nation as a whole. Drawing on recently opened government files, personal interviews, and the archives of major British computer companies, Programmed Inequality takes aim at the fiction of technological meritocracy. Hicks explains why, even today, possessing technical skill is not enough to ensure that women will rise to the top in science and technology fields. Programmed Inequality shows how the disappearance of women from the field had grave macroeconomic consequences for Britain, and why the United States risks repeating those errors in the twenty-first century.




Fundamentals of Clinical Data Science


Book Description

This open access book comprehensively covers the fundamentals of clinical data science, focusing on data collection, modelling and clinical applications. Topics covered in the first section on data collection include: data sources, data at scale (big data), data stewardship (FAIR data) and related privacy concerns. Aspects of predictive modelling using techniques such as classification, regression or clustering, and prediction model validation will be covered in the second section. The third section covers aspects of (mobile) clinical decision support systems, operational excellence and value-based healthcare. Fundamentals of Clinical Data Science is an essential resource for healthcare professionals and IT consultants intending to develop and refine their skills in personalized medicine, using solutions based on large datasets from electronic health records or telemonitoring programmes. The book’s promise is “no math, no code,” and it explains the topics in a style that is optimized for a healthcare audience.
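
As a hedged illustration of the prediction model validation topic mentioned above (not an example from the book, which promises no code), the sketch below fits a deliberately simple threshold model on synthetic data and judges it only on a held-out split; the 70/30 split and the data are arbitrary choices.

```python
# Generic sketch of hold-out validation: fit on one part of the data,
# judge on the rest. Synthetic "patients" and a deliberately simple
# threshold model; none of this is taken from the book.
import random

random.seed(1)

# Synthetic records: (lab value, outcome), where higher lab values
# make a positive outcome more likely.
data = []
for _ in range(200):
    x = random.gauss(0, 1)
    data.append((x, int(x + random.gauss(0, 0.5) > 0)))

random.shuffle(data)
split = int(0.7 * len(data))            # 70/30 split, an arbitrary choice
train, test = data[:split], data[split:]

# "Fit": pick the threshold that maximizes accuracy on the training set.
candidates = [x for x, _ in train]
threshold = max(candidates, key=lambda t: sum((x > t) == bool(y) for x, y in train))

def accuracy(rows):
    """Fraction of rows where the thresholded prediction matches the outcome."""
    return sum((x > threshold) == bool(y) for x, y in rows) / len(rows)

print(f"training accuracy: {accuracy(train):.2f}")
print(f"held-out accuracy: {accuracy(test):.2f}")
# Only the held-out figure says anything about how the model behaves
# on patients it has not seen; that is the point of validation.
```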




Race After Technology


Book Description

From everyday apps to complex algorithms, Ruha Benjamin cuts through tech-industry hype to understand how emerging technologies can reinforce White supremacy and deepen social inequity. Benjamin argues that automation, far from being a sinister story of racist programmers scheming on the dark web, has the potential to hide, speed up, and deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. Presenting the concept of the “New Jim Code,” she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. Moreover, she makes a compelling case for race itself as a kind of technology, designed to stratify and sanctify social injustice in the architecture of everyday life. This illuminating guide provides conceptual tools for decoding tech promises with sociologically informed skepticism. In doing so, it challenges us to question not only the technologies we are sold but also the ones we ourselves manufacture.