What Data Visualization and Analysis Taught One Activist about Airbnb’s Impact on Communities
Nightingale, The Journal of the Data Visualization Society | November 3, 2021
https://nightingaledvs.com/what-data-visualization-and-analysis-taught-one-activist-about-airbnbs-impact-on-communities/

Activist and technologist Murray Cox has been described as a “Lone Data Whiz” who is Airbnb’s “Public Enemy No. 1 in New York.” These monikers aim to personify the amount of data, maps, and reports Cox has collected and created to understand Airbnb’s impact on communities.

But Cox explains that his project, Inside Airbnb, was simply in response to two of his beliefs: first, that housing is a human right that has been commoditized, second, that data might further the conversation. 

“It was a civic project, my civic responsibility to contribute to the city,” Cox said of starting the project in NYC. “In the same way that other housing activists do their organizing work, I used my data skills to contribute [to] the city.”

He launched Inside Airbnb in 2014, after teaching a youth workshop on gentrification in New York City’s Bedford-Stuyvesant neighborhood using data analysis and visualization. He began scrutinizing Airbnb’s public listings data. On his website, users can download the most recent year of data for free. The data he collects includes details about hosts, such as a host’s self-reported location and number of listings, as well as the price and room type of each listing. With Inside Airbnb, Cox turns this data into dashboards for each city, built with Mapbox Studio and OpenStreetMap.
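The kind of analysis this data supports can be sketched in a few lines of Python. The rows below are invented for illustration, though the column names mirror the per-city listings CSVs Inside Airbnb publishes; the pattern shown, counting entire-home listings per host, is the one used to flag likely commercial operators:

```python
import csv, io
from collections import Counter

# Invented sample rows; "host_id", "room_type", and "price" mirror fields
# in Inside Airbnb's published listings CSVs.
sample = """host_id,room_type,price
101,Entire home/apt,150
101,Entire home/apt,210
101,Entire home/apt,95
202,Private room,60
303,Entire home/apt,120
"""

# Count entire-home listings per host.
listings_per_host = Counter()
for row in csv.DictReader(io.StringIO(sample)):
    if row["room_type"] == "Entire home/apt":
        listings_per_host[row["host_id"]] += 1

# Hosts with more than one entire-home listing match the pattern activists
# like Cox flag as commercial operators rather than home sharers.
commercial = {h: n for h, n in listings_per_host.items() if n > 1}
print(commercial)  # {'101': 3}
```

Against a real Inside Airbnb download, the same loop would read a city’s listings file from disk instead of an inline string.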

Two years after the launch, Cox released the site’s first major report with fellow data activist and independent co-author Tom Slee, who collects his own data, separate from Inside Airbnb’s. Cox and Slee analyzed data that Airbnb had publicly released in December 2015 about its presence in New York City, and found that the company had removed more than 1,000 listings from the November data before releasing it.

All of the removed listings were entire homes whose hosts were renting out multiple entire homes, as opposed to a single home that was presumably their own residence. Such listings are the biggest concern for activists like Cox, who fear they are operated by commercial hosts rather than residents sharing their own homes. Airbnb had presented the November data as an average day of its operations in New York City, with only a small number of commercial hosts, a claim Cox and Slee found to be untrue.

A graphic from Cox and Slee’s 2016 report showing how Airbnb removed listings before publishing public data to paint a different picture of its operations.

Now seven years into the project, Inside Airbnb has expanded to 85 cities and regions in over 25 countries. The website includes an interactive map and summary of listings for each location, along with downloadable data. In a pre-pandemic world, Cox would travel to speak at conferences about his findings. He fields academic and journalistic requests almost daily. He even met with Airbnb in February 2019, where they discussed a proposal to regulate home sharing in New York.

Visuals from Cox’s 2020 report for The Left in the European Parliament show how the majority of Airbnb listings in several cities, like Paris, are for commercial use.

A bill requiring registration for short-term rentals has been discussed in the New York City Council, which held a hearing on it in September. Cox, who has been working on the legislation with the Coalition Against Illegal Hotels, gave testimony at that hearing.

“What really surprised me is how big [Inside Airbnb] grew and what impact it had,” he said. 

In June 2015, Inside Airbnb’s website included only eight locations, according to an archive of the site on the Wayback Machine (top). Now, there are so many locations that users must scroll to see them all (bottom).

Today, Cox is grappling with the scalability of Inside Airbnb as an open-source data project. He considers his work first and foremost as housing activism and wants the data to be free for activists to address issues in their own cities.

Visualizations of Airbnb’s presence in Berlin from Cox’s 2020 report for The Left in the European Parliament.

But the project does come with expenses like proxy servers, cloud storage, and data-transfer fees incurred when people download his data, which together come to a few thousand dollars per month, Cox said. He generally doesn’t pay himself. To support the project and himself, he works part-time in product development, management, and infrastructure in the art-technology space.

“How do I release free data because that’s going to help activists? But at the same time, I have expenses that I have to pay,” he said.

Besides funding the project personally, he now charges for access to archived data that is more than a year old. He recently removed archived data from his site after finding that some users were simply scraping it on their own instead of contacting him to request it. Today, access to one city’s archived data costs $300 for academics or other institutions doing research related to Inside Airbnb’s mission. Commercial researchers, and academics studying a topic unrelated to that mission, pay $500 for the data, according to a pricing guide on Inside Airbnb’s website.

Data from the last 12 months is still free for all to use. Activists, journalists, and residents, as well as governments doing work aligned with Inside Airbnb’s mission, can request free access to archival data. But some cities and regions, like San Francisco, New York City, and Vaud in Switzerland, also pay him a few hundred dollars per month to access the data. Cox has also created an advisory board of activists and researchers, which helps find potential collaborators and funding sources for Inside Airbnb.

Given these hurdles, Cox said he would advise data activists to think about how they can garner community and financial support to scale their project if needed. Finding communities of civic technologists, like Beta NYC, is one way to find like-minded people to join a project.

He added that working with traditional advocacy groups who aren’t involved with data is equally crucial, since it brings data activists closer to communities.

“It’s really important to ground the work with people that are already doing it and might have been doing it for a long time and might have a better appreciation or be closely connected to the community that are most impacted by the issues,” he said. “I think most technologists are not that well-connected to the issues or the communities impacted — or not doing their own activism.”

An infographic from Cox’s report for The Left in the European Parliament depicts his three regulatory recommendations: requiring Airbnb hosts to register with the city, as well as requiring platforms to only allow permitted people to list on their site and to regularly submit their active listings to the city.

Despite the time and cost of maintaining Inside Airbnb, Cox said he feels that the project is most “justified” when he sees cities use his data in the process of instituting housing regulations. He is also hopeful that more people in data fields will recognize that they can use their skills both for financial gain and societal improvement.

“A lot of people that have data skills are interested in projects just to make money,” he said. “I think there’s also an opportunity to use technology and data to help society.”

Six Ways to Bring Empathy into your Data
Nightingale | June 9, 2021
https://nightingaledvs.com/six-ways-to-bring-empathy-into-your-data/

One of the big challenges in visualizing data, and in quantitative research in general, is helping readers connect with the content. Connecting directly with people and communities, and trying to better understand their lived experiences, can help content producers create visualizations and tell stories that better reflect the true experiences of different people. Our recent report focuses on this important aspect: bringing a racial equity awareness to how you and your organization work with and communicate data and research.

Embracing empathy in data and data visualization is a key dimension for people working with data who want to put their work into the hands of policymakers, stakeholders, and community members who can use it to effect change. Inclusive and thoughtful data visualization that respectfully reflects the people and communities of focus can also help researchers build trust with those communities.

We think of empathy as it applies to communicating data across six main themes:

1. Put people first. First and foremost, we need to remember and communicate that the data shown reflect the lives and experiences of real people. Data communicators must help readers understand and recognize the people behind the data.

2. Use personal stories to help readers and users better connect with the material. Pairing data-driven charts with personal stories centered on individual experiences can help readers understand and identify with the people represented in the research and data visualizations. Techniques that can be used in tandem with data visualizations to help lift up personal stories include photography, illustrations, pull quotes, and oral histories.

3. Use a mix of quantitative and qualitative approaches to telling a story. Most charts and graphs are built on top of spreadsheets or databases of quantitative data. However, focusing on numbers alone without any context can overlook important aspects of a story including the “why” and the “how.”

4. Create a platform for engagement. This can take the form of interactivity in which users are able to manipulate buttons, sliders, tooltips, and other elements to make selections, filter the dataset, or create customized views of a chart. Such engagement can be leveraged as a way to allow users to find themselves in the data or discover the stories that most interest them. Another form of engagement is offering audiences a means of providing feedback about a data tool or visualization.

5. Consider how your framing of an issue can create a biased emotional response. Carefully consider how the data you visualize presents a particular perspective on the content. Take the examples ProPublica journalist Lena Groeger discusses in this post on different ways to visualize the impact of crime on local communities. Maps that show the locations where crimes occurred versus maps that show the percentage of residents in a neighborhood who are in prison are two different ways to visualize data related to the criminal justice system. What data we choose to focus on and what we choose to ignore can bias our audiences’ perceptions of the issues about which we are communicating.

6. Recognize the needs of your audience. Taking an empathetic view of readers’ needs as they read or perceive information is an important step toward better data communication. This kind of empathy can also be couched in terms of producing visualizations that are accessible to people with visual, physical, or intellectual impairments; reducing overly technical or jargon-laden language; and translating your work into the languages most used by your target audiences.

Being empathetic to the people and communities of focus does not imply sacrificing the data and methods used in responsible, in-depth, sophisticated research. In fact, the opposite is true: high-quality research and empathy for people and communities can be complementary. Effective research necessarily means understanding someone else’s point of view nonjudgmentally and recording that perspective as accurately and truthfully as possible. Empathy underlies research and data visualizations that uphold diversity, equity, and inclusion, so data communicators should seek to find ways to help their audiences understand and connect with the people that the data represent.


Read the full Do No Harm guide here.

Racial Bias in Code and Data: An Interview with Alex Garcia
Nightingale | February 20, 2020
https://nightingaledvs.com/racial-bias-in-code-and-data-an-interview-with-alex-garcia/

As a young data journalist, I was advised to attend NICAR — an annual data journalism conference organized by Investigative Reporters and Editors (IRE) and their suborganization, the National Institute for Computer-Assisted Reporting. In researching the conference, I stumbled upon recordings of the 2019 NICAR Lightning Talks, which are five-minute presentations related to data journalism chosen by a popular vote. Last year, Alex Garcia gave a talk called “5 ways to write racist code (with examples).” I was able to chat with him last week about his talk, the response he received, and how he’s feeling about it a year later.

Emilia Ruzicka: Thank you so much for agreeing to meet with me! Can we start with an introduction?

Photo credit: Evangelina Rodriguez

Alex Garcia: Sure! My name is Alex. I recently graduated from University of California, San Diego (UCSD) with a major in computer engineering. I’m from Los Angeles, went to school down in San Diego. I’ve always been interested in computers and when I started at UCSD I decided, “Oh, computer engineering might be something kind of cool.” The first time I ever programmed or did anything in this field was when I started out in college.

I didn’t know about the data journalism field until about a year and a half or two years ago. I found out through Reddit’s r/DataIsBeautiful, where I found all these New York Times articles and whatever else, so that’s how I got into it. I didn’t know much about the actual field, or NICAR, until I saw someone randomly tweet about it. I saw it was going to be in Newport Beach and I was like, “Oh, that’s really cool!” In terms of my actual experience in journalism, I honestly have none. There are student newspapers on campus and all that, but I never really got into that, never knew it was available. I did do a little bit of data stuff, but I just really didn’t know much about it.

So during NICAR I met a lot of really cool people, saw what the field was like, got really interested in it. I met someone who goes to UCSD and is interested in journalism. We were actually roommates for this past quarter, which was really cool. Right now, I just graduated in December. I have a couple of months off where I’m not doing too much. I’m going to start a new job at the end of March doing general software engineering stuff. In the future, I hope to get into some sort of newsroom, some kind of data journalism, later down the road.

ER: That’s a really interesting journey, where you started not knowing, entered computer science, and then by association and serendipity found data journalism. Speaking of, last year, you gave a lightning talk at NICAR. Could you talk about your topic?

AG: Yeah, so a little bit of background about that. It was specifically about racial bias in algorithms and racial bias in code. This is a field that at the time I was somewhat interested in because I’d see a tweet or an article here and there that someone wrote. I had friends from different fields who were taking classes and they’d say, “Hey, this is a cool article, why don’t you read it?” and it would be about courtroom justice and how these algorithms would determine whatever. So I was always tangentially interested in it. I always had the idea in the back of my mind that I should just aggregate all these links or stories that I find and have it in one list that people can go to and find. But I never did that because I just never got around to it.

So when I signed up for the conference and saw they had these lightning talks where you can do a few minute speech about whatever you want, having that idea in my mind, I thought I could either aggregate this list or do this talk. I was specifically excited to do a talk to journalists, too, because I don’t know how many reporters really know about this field. They may know tangentially — kind of like my knowledge of college sports and how students can get paid for playing; I know something about that field, but I don’t know much — so I thought it was the same in this case, where people may have heard stories about courtroom injustice or some Microsoft twitter bot that went crazy because people took it over, but they may not know the differences between what leads to those things. I thought if I aggregate all these things and show how diverse this field is, how these different problems arise, and what fields they appear in, it might be something nice to share.

I had a bunch of bookmarks to all these different stories I had, cobbled them together, threw a pitch in, and it was a lot of fun aggregating! I’m not the best public speaker and I’m not the best organizer for all these thoughts, so the night before I was frantically working on the slides. I had a lot of ideas about what I wanted to put in the talk, but since it’s only five minutes, I had to cut things out, cut things short, and move things around. But it was fun! It was definitely nerve-wracking, especially because I knew no one in the audience besides two or three people I had met during the days leading up to it.

ER: You touched on this a little bit, but what inspired your talk? Was there any particular article that you encountered that made you think you needed to do your talk on racial bias in code or was it more of the conglomerate idea that sparked it?

AG: That’s a great question. I think for general inspiration of the talk, it was just a bunch of different links that I saw and stories that I would find. Also, the general — not ignorance, per se — but how people don’t know that this is a problem or that it could exist. One of the things that I don’t think I mentioned in the talk specifically, but one of the links that I had was a Reddit thread about gerrymandering. There was some news article talking about gerrymandering and one of the top comments was, “Oh, this research team or this company is working on an algorithm that could do it automatically. They give it whatever and then the computer will do it, so there will be no bias at all.” A couple of comments after that they were saying, “Why are humans doing this? Computers could do it and it would have no bias.” And somewhere hidden in there, there was one comment saying, “Hey, that’s not really how that works. A computer could do it and it could still be biased and there’s many different ways that could come across.” So I think that thread, in particular, stuck out to me. I’ve seen similar threads since then, whether it’s just random regulatory items or other random stuff where people will say that if a computer could do something, it would be a lot easier or more fair.

There would also be other general conversations I would have with friends, not necessarily talking about whether it would be fair for computers to do something, but more about the actual impacts that these issues might have on people. I think there was also a tweet from Alexandria Ocasio-Cortez. She said something about how algorithms have bias and algorithms could be racist. And then there was a reporter from Daily Wire saying that code can’t be racist. So it’s just a lot of nit-picky things where I don’t know if people really understand this, how it works, and how it manifests.

A slide from Garcia’s talk about Alexandria Ocasio-Cortez and Ryan Saavedra of Daily Wire

Also, about a year before the talk, I took a small seminar in computer science education and the professor at UCSD was really interested in K-12 computer science education. Part of the stuff she would talk about and that I learned more about in future classes was the importance of knowing the fundamentals of computer science or programming. Not necessarily knowing how to program or whatever, but knowing how it works, the way it works, and what it can or can’t do. Think about the general US population and how many actually know, not how the computers work, but what their limits are. That’s a field that drives this conversation. You know, if people are ignorant or they don’t know that these computers are not unbiased, that can be a problem.

ER: You mentioned briefly how important you felt it was to present this to an audience of journalists. Could you talk more about that and any sort of considerations you made when you were giving your talk, knowing that your audience was journalists and the ethics that are inherently assumed when journalists present information?

AG: I remember one thing I was thinking about while I was making the presentation and noticed specifically at NICAR was that most of the journalists there are journalists first. They learned how to code while working on stories or doing their job. There are some people who are half-journalist and half-engineer and they know more about coding, but most of the audience seemed to be the kind of people who would take a Python or R workshop to learn about them for the first time. So I didn’t want to have anything that was too technical or show too much code. One thing I did to counteract that was to use a lot of headlines or stories by reporters who were in the field who know more about it and would be familiar to the audience. And while I did show some code, I made sure it wouldn’t be too complicated and would be easy to explain.

One of the points was about sentiment analysis and how, if you use the wrong model and pass in a string like “I like Italian food,” you get a higher sentiment score than if you pass in “I like Mexican food.” So if I did show code, it was very simplified and probably something people were somewhat used to.

A slide from Garcia’s talk about sentiment analysis
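The mechanism behind that slide can be reproduced with a toy model. This sketch is not Garcia’s actual example and the tiny “training corpus” is invented, but it shows how a naive-Bayes-style scorer trained on skewed text (one cuisine word appearing mostly in negative contexts) assigns different sentiment to otherwise identical sentences:

```python
from collections import Counter
import math

# Invented labeled corpus (1 = positive, 0 = negative). Note the skew:
# "mexican" appears mostly in negatively labeled sentences, "italian"
# in positive restaurant reviews. Real models inherit the same skew
# from real text at much larger scale.
corpus = [
    ("great italian dinner", 1),
    ("lovely italian wine", 1),
    ("mexican border crisis", 0),
    ("mexican cartel violence", 0),
    ("great dinner", 1),
    ("terrible service", 0),
]

def train(corpus):
    """Count word occurrences per class, naive Bayes style."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in corpus:
        counts[label].update(text.split())
    return counts

def score(counts, text):
    """Log-odds of the positive class, with add-one smoothing."""
    pos_total = sum(counts[1].values())
    neg_total = sum(counts[0].values())
    s = 0.0
    for w in text.split():
        p_pos = (counts[1][w] + 1) / (pos_total + 1)
        p_neg = (counts[0][w] + 1) / (neg_total + 1)
        s += math.log(p_pos / p_neg)
    return s

counts = train(corpus)
italian = score(counts, "i like italian food")
mexican = score(counts, "i like mexican food")
assert italian > mexican  # identical sentences, different "sentiment"
```

The sentences differ only in the cuisine word, yet the model scores one more positively: the bias lives entirely in the training data, not in any malicious line of code.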

For the ethical implications, I’m not sure. I did my best to have sources or links that people could go to and follow, but what I didn’t talk about was how you can report on this or how you can find different agencies that may be meddling in this, mostly because I don’t know how to do that. I don’t have a journalism background, so I don’t know how you find sources or what’s the best, most ethical way to go about doing that. I kind of avoided doing that and said: “Here are some stories and headlines that all have something to do with each other and some reasons behind how one event led to another.”

ER: After you gave this talk, what was the response? Were people really interested and want to learn more? If so, has that response continued or have you seen a continued trend in the media in reports on stories like the ones you used in your talk?

AG: Right after the talk, I would get random Twitter DMs here and there from journalists saying, “this was really cool, I really liked it!” or “I had a small question about a source that you used.” One person wanted to talk about the realm in general — what companies are maybe more susceptible to this or that danger. Personally, it was a great way to meet people and see who is working in this field and who is interested in it.

In terms of long term what I’ve seen in the media since my talk, I think I’ve seen the field get a little bit worse. There’s a company that The Washington Post did an article about where you send them videos of job interviews and the company uses AI to see if they’re a good candidate by analyzing speech and body patterns. And it’s so problematic because there are just so many things that can go wrong, but seeing the amount of money and velocity and power that they have is pretty scary. That’s probably the biggest thing I’ve seen since the talk. I’ve probably seen a couple of other headlines because there’s more and more of a focus on this, especially an academic focus, but I can’t think of any off the top of my head.

A slide from Garcia’s talk depicting the racial and gender bias of facial recognition algorithms

ER: You mentioned earlier that you had a lot of things you wanted to put in your talk, but because of time constraints you couldn’t. If you had the opportunity to give the talk again without a time limit, what are some things you would have mentioned, both from when you were preparing the talk and from current issues of racial bias in code and data?

AG: I think for each of the five sections I had, there were one or two more articles I had, so I would have included those to make my points stronger. Also, I had this reach goal for the presentation when I wrote the slides to use a Javascript tool to make my slides a website. I wanted to run a machine learning algorithm during the presentation to show that you don’t need a big fancy server or computer to have the resources to make biased code. And at the end, I would be able to show that it was running on some NYPD stop and frisk data that I had and how biased the outcome could be with some pretty readily available tools and data. It’s not hard at all for this to happen.

I was trying to make it work, but the logistics weren’t working out and I didn’t want to cause too many difficulties, so I just went with regular slides instead, but I think having an example like that would drive the point home even further. Even the presentation you make for a talk has the power to make automatic, biased decisions for no good reason. I also would have liked to do demos of where things could go wrong, such as the sentiment analysis example I used, so that people could see exactly what was happening instead of just getting the theory. I think recreating that would reinforce my ideas.

ER: Cool! Is there anything else you want to say about the importance of being aware of racial bias in code and data or how people can become more conscious and evaluative of what they’re consuming?

AG: I think one heuristic that can be helpful in noticing when these things happen is watching for when someone says, “Oh yeah, a computer did that” or “a computer made the decision” or even “oh, that can’t be biased because of X, Y, or Z.” That’s something I feel happens a lot from day to day where something happened “automatically,” but for me, that’s a red flag. Those are things to look into a little more and check out how the decision was actually built.

With data visualization specifically, when you’re making these visualizations, it’s only as sound as the data you’re building on top of. If the data has underlying problems, then no matter what you put on top of it, you’re just going to make it worse. For instance, electoral maps. If you look at election results by county for the entire United States, you’re in some ways supporting an older, racist, white supremacist system. The goal might not be to create a racist visualization, but in some ways, you’re biasing the view and integrity of that data.

There are many other examples with data visualization and data analysis, but just knowing that whatever data you’re using, you’re sitting on top of a historical view of how it came to that point. I think that’s definitely something to consider as you work.

You can listen to Alex Garcia’s full lightning talk via IRE Radio here.

The final slide from Garcia’s talk

Mapping Mexico City’s Indigenous People
Nightingale | September 4, 2019
https://nightingaledvs.com/mapping-mexico-citys-indigenous-people/

“They aren’t statistics. They are human lives,” says Juan Luis Toledo Sanchez, member of the autonomous cooperative Cimarronez (or C.A.C.A.O., by its Spanish acronym), in Mexico City. He is part of the team that, for five years, has been mapping the indigenous people in Mexico’s capital.

The cartographic map Pueblos originarios: el rostro oculto del ombligo de la luna (Indigenous People: The Hidden Face at the Center of the Moon), made by Cimarronez, shows the presence of indigenous people in Mexico City, something that neither the State nor non-profit organizations had done before in the country, even though Mexico is one of the most diverse nations in the world in terms of its indigenous cultures.

They mapped the 2.1 million people who identify as indigenous in Mexico City and its metro area — 7 percent of the 30 million people living there. For comparison, 2.9 million Native Americans live in the United States, just 1 percent of its 300 million people.

Mexico City is home to 13 percent (446,000) of the country’s speakers of its 68 indigenous languages, which means the capital is a region that should take notice of their presence, but that isn’t the case. Although historically a Nahuatl region, it has received people from other states who speak their own languages: Zapoteco, Triqui, and Mixteco from Oaxaca; Maya from the Yucatan peninsula; Tzotzil and Tzeltal from Chiapas. Those are just some of the languages that have been added to the Nahuatl and Spanish spoken in Mexico City.

Juan Luis Toledo says that “many investigations have a prejudice against the indigenous people, because they show the indigenous people in terrible conditions, with religious fanaticism, or with exotic food and dress. We say no! We say the indigenous people exist beyond that.” He added that this reflects “the oppression and the history and geography that show only the ruling class and erase the indigenous.”

According to the document by Toledo Sanchez that accompanies it, the map aims to be a humble tool for organizing against power.

Detail of “Pueblos originarios: el rostro oculto del ombligo de la luna,” by Cimarronez. The swarm-like dots are the indigenous people living there.

Many of Mexico City’s indigenous inhabitants are migrants. They leave their hometowns in states like Oaxaca, Guerrero, Chiapas, or any of the other 31 states in the country to find a better life. They also find racism and discrimination, both from society and from the authorities, says Toledo Sanchez.

The map is available online at cimarronez.org, but only as a static image, not in high resolution or as an open archive. It also covers subjects related to how indigenous people live in Mexico City and its metro area: the rural areas, ancient urban centers, and conflicts over land. It shows the social division between the poorer class (many of them indigenous) and the middle and upper classes. One of the main conflicts the map shows is the relationship between Mexico City and the metropolitan area at Mexico’s center, which includes Mexico City, the State of Mexico, and Hidalgo, and covers an area of 7,815 square kilometers.

One of the main tensions in Mexico’s capital is the relationship between the metropolitan area and the indigenous people. The document that accompanies the map shows how the metropolitan and urban areas in Mexico are where capital is sustained, through labor exploitation, dispossession of land, and centralization of the ruling powers.

Mexico’s Valley Megalopolis, covering the capital and the states that surround it. The red marks are the main urban centers.

The public policies of Mexico City erase the indigenous people who migrate from other states, says Toledo Sanchez. Around 300,000 of the indigenous people in Mexico City and its metro area aren’t counted in the census because they work at the Central de Abastos (Mexico’s biggest market), as stonemasons, or cleaning houses, and so are omitted by their bosses. The map shows that there are 400 irregular encampments across the metro area with at least 30,000 people living in them, but they too have been left out of the census.

According to Cimarronez, because they are considered “outsiders,” politicians disregard their rights to water, services, health, education, land, and social programs.

This map makes people conscious of that relationship and of the historical presence of indigenous people in Mexico City, even before it bore that name or was conquered by the Spaniards in 1521.

During the investigation, Cimarronez found that Mexico had experienced millennia of migration processes; even the foundation of Tenochtitlan in 1321 was one of them. Back then, there were mobility circuits across Mesoamerica, reaching from Oaxaca and the Maya territories.

“The Conquest of Tenochtitlan was more important than getting to the moon,” says Toledo Sanchez. He believes that establishing cities in the Americas was the most important thing for the Spanish crown and for capitalism, because it permitted European countries to reproduce capitalism in the New World. Hence the map about the indigenous people.

“We said: let’s think where are we, how many are we, who are we, as a need, and then this was the result, a map with 25 sections and 25 layers of information.”

The Diagnostic of the Mexico City Indigenous People, made by the local authorities, recognizes their historical presence. In the 1940s they represented just 1.2 percent of the inhabitants of the country’s capital. However, says Toledo Sanchez, the government established the concept of “outsiders, not from here” to erase their presence from memory. “We show that there are at least 1,500 years of historical presence of the indigenous people, and they aren’t ‘outsiders,’ because it’s a territory where they live.”
