Apologies for the long gap since my last post. I’m getting back to blogging after a long spell of teaching, followed by two months of dealing with a second wave of COVID-19. Here’s some interesting stuff I found while pulling myself back up.
[Before this, a request:
As you know, a second wave of COVID-19 infections is rapidly breaking down India’s healthcare and socioeconomic systems. Several organisations are working to raise funds to tackle various aspects of this issue.
I’m listing some of them below. If possible, please do give generously:
1. My friend Kunal Deshpande and his company, Qrious Creative Media, are raising money for Give India’s ICRF 2.0 for COVID-19 relief in India.
2. My colleagues at IIHS are working with our community partners in Indore to raise funds for medical equipment and ration kits: https://www.ketto.org/fundraiser/indore-covid-relief
3. The Centre for Social and Environmental Innovation and the Institute of Public Health are helping the government of Chamarajanagara district, Karnataka, to set up 9 COVID-19 clinics: https://fundraisers.giveindia.org/projects/bringing-life-saving-medical-equipment-to-underserved-rural-communities-in-the-chamarajanagar-district-karnataka
Thank you.]
1. Slime Molds in Mapping Urban Infrastructure
A Tweet by @AlampayDavis on how slime molds can be used to map rail commuter networks is going a bit viral.
The Tweet refers to a larger piece from the London Review of Books about the world of fungi.
This falls within my research interests, so I’m tempted (and somewhat confident) enough to respond. Several points:
Using slime molds to map city networks is not new.
I first read about molds mapping the Tokyo rail network back in 2010–11. While this isn’t the piece I remember reading, this National Geographic article shows how old the concept is. Interestingly, it doesn’t seem to have caught on as an urban planning tool since then.
There could be many reasons for this. It could simply be a lack of interest. Alternatively, the promise of mega-computing (AI, machine learning and so on) may have overshadowed other simulation techniques. If someone has the time (I’m unfortunately busy staying alive in a country ravaged by COVID-19), it might be worth digging deeper to find out exactly why this never took off.
My own hunch? I believe slime molds have similar weaknesses to computer simulations when it comes to city planning.
At the scale of cities, urban planning deals with a complex adaptive system (CAS). More accurately, we’re dealing with multiple CASes, nested within and intertwined with each other (economies, communities, social networks, ecologies, and so on).
While cities around the world display certain common characteristics (city-size distributions following power laws are a well-known example), they are still too distinct from each other for deductive models to map out (and predict) their evolution over time.
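The power-law regularity mentioned above (often called Zipf’s law for city sizes) is easy to illustrate with a short sketch. The population figures below are made-up, illustrative numbers, not real data for any country:

```python
import math

# Hypothetical city populations, largest first (illustrative numbers only)
populations = sorted([8_400_000, 3_900_000, 2_700_000, 2_300_000,
                      1_600_000, 1_500_000, 1_400_000, 1_300_000], reverse=True)

# Under Zipf's law, a city's population is roughly proportional to 1/rank,
# so log(population) vs log(rank) should fall close to a line of slope -1.
for rank, pop in enumerate(populations, start=1):
    print(rank, round(math.log(rank), 2), round(math.log(pop), 2))

# Quick check: the slope between the first- and last-ranked cities
slope = (math.log(populations[-1]) - math.log(populations[0])) / \
        (math.log(len(populations)) - math.log(1))
print(round(slope, 2))  # close to -1 for a Zipf-like distribution
```

Note that this only shows a shared statistical regularity: two countries can both follow Zipf’s law while their individual cities evolve in completely different ways.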
What this implies: Just because a city has evolved a particular way until now, there is no guarantee that it will continue along a similar path in the future. Under these circumstances, how can we tell whether the paths taken by a slime mold sample will always represent the most efficient outcome for that particular city?
This is partly why mega-infrastructure projects are so tough to plan, especially in existing cities. If you begin building a mega-infra project in 2010 and finish in 2020, the city you planned for (back in 2010) may have changed completely, unpredictably.
This doesn’t mean we shouldn’t plan or simulate at all. But it does imply that it’s difficult to confirm whether a simulated solution is efficient. We can’t truly know until we actually finish building the project.
For an analogy, consider determining the efficacy of a new drug in medicine. You can’t verify that a drug works until it’s actually been given to patients. However, the scale of a human body is much smaller than that of a city: medicine can estimate how good a drug is by running Randomised Controlled Trials (RCTs) across many human subjects. Estimating drug efficacy via RCTs is technically feasible and financially viable most of the time.
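The RCT logic can be sketched in a few lines. Everything here is simulated with made-up numbers (a hypothetical drug effect and noise level), purely to show why many randomised subjects let us recover an effect we could never verify from a single patient:

```python
import random
import statistics

random.seed(42)  # make the simulation reproducible

# Simulated RCT: a hypothetical drug adds a small benefit to a noisy outcome.
# All parameters are illustrative assumptions, not real trial data.
TRUE_EFFECT = 2.0
control = [random.gauss(10, 3) for _ in range(500)]               # placebo group
treated = [random.gauss(10 + TRUE_EFFECT, 3) for _ in range(500)]  # drug group

# With randomisation and enough subjects, the difference in group means
# converges on the true effect, even though individual outcomes are noisy.
effect = statistics.mean(treated) - statistics.mean(control)
print(round(effect, 2))
```

The trick that makes this work is having hundreds of comparable, randomly assigned units. Cities give us no such luxury: there is no pool of 500 near-identical cities to randomise a metro line across, which is exactly why the RCT logic doesn’t transfer.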
This doesn’t apply to cities (at least, not yet). We may be able to replicate physical forms and flows on smaller scales for empirical testing, but (a) with CAS, there’s no guarantee that what works on small scales applies to larger scales and (b) non-physical aspects of cities may be harder to model using this technique. If you’re wondering why I’m emphasising the importance of scale so much, here’s a fantastic recent video featuring microwave ovens, hamsters, and medical experiments for humans that illustrates the difference between small-scale and large-scale quite well (watch until the end).
In summary, ascertaining the efficiency of an urban project is very difficult to do ex-ante (before you build it), whether we use slime molds, computer simulations or other types of models. There are no experiments we can conduct at scale to empirically verify impacts of a project on a city. The best we can do is make estimates and keep revising as we learn more.
2. California Cancels SATs
The University of California (UC) network in the United States has dropped the requirement that undergraduate applicants take the Scholastic Aptitude Test (SAT). For Indians not familiar with the SAT, think of it as a Common Entrance Test, but spanning disciplines rather than restricted to one field like engineering or law.
As with any major change to university admissions criteria, this has generated quite a backlash. It should be noted that this is the second major educational change to come out of California in recent weeks, following a proposal to reduce the emphasis on calculus in high school. Both changes have been associated with a drive towards social inclusion and justice, although in different ways.
There are conflicting opinions on the efficacy of admission tests like the SAT in predicting college performance. Some believe that tests are more inclusive than other criteria (like reference letters), giving people without good connections or training a chance to get into college. Others believe that tests are just as subject to manipulation by the rich (think of private tuition, which not everyone can afford). Either way, there are strong responses out there.
A couple of thoughts: First, I’m skeptical that anyone can examine this controversy without bias. Entrance tests like the SAT (or in India, the JEE, the CAT or the UPSC) are more than just evaluations of academic ability. They carry other labels, like social prestige. Your performance marks you for life and shapes your future trajectory. A person who did well in these tests will be hard-pressed to admit their shortcomings. Likewise, a person who did badly is unlikely to concede their value. These biases will somehow play into any evaluation of these tests, by anyone.
Second, admissions criteria are an issue that will never be resolved as long as demand for college education outstrips supply. The problem runs much deeper than admissions. We live in a world where it’s becoming harder for people to build a good life (and a good standing in society) without jobs that require a university education.
This is serious. Universities were not originally designed to educate on the scale of millions (or, in India’s case, hundreds of millions). Most modern systems cannot handle this burden, resulting in constant conflict over admissions. However, admissions criteria are (to use a tired cliché) low-hanging fruit: relatively easy to modify and definitely easy to argue about.
That being said, there is a relatively simple solution (for now): give applicants the choice to maximise the criteria that work for them, but limit their choices. Usually, university applications consist of (a) test results, (b) school grades, (c) reference letters, (d) extra-curricular or sports performance and (e) statements of purpose. Get applicants to submit only one or two of these, whichever work best for them, but get them to specify their choices beforehand rather than throwing everything at the wall and hoping something sticks.
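The pre-commitment rule above can be sketched as a simple scoring function. The criterion names and scores below are entirely hypothetical, just to make the mechanism concrete:

```python
# Sketch of the "pick your criteria" admissions rule described above.
# Criterion names and score values are hypothetical, for illustration only.
CRITERIA = {"test", "school", "references", "extracurricular", "sop"}
MAX_CHOICES = 2

def evaluate(application: dict, chosen: set) -> float:
    """Score an applicant only on the criteria they pre-committed to."""
    if not chosen or len(chosen) > MAX_CHOICES or not chosen <= CRITERIA:
        raise ValueError("must pre-commit to one or two valid criteria")
    # Average over the chosen criteria; everything else is ignored,
    # so 'throwing everything at the wall' gains an applicant nothing.
    return sum(application[c] for c in chosen) / len(chosen)

applicant = {"test": 92, "school": 75, "references": 60,
             "extracurricular": 88, "sop": 70}
print(evaluate(applicant, {"test", "extracurricular"}))  # 90.0
```

The point of the hard cap is in the `ValueError`: an applicant who tries to submit everything is rejected outright, which forces the up-front choice the proposal depends on.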
Some may argue this is not fair, since applicants would no longer be judged by a common standard. That may be true, but realistically? Folks who excel on all the above criteria are quite probably those who game the system or cheat (not always, but I’d suspect a good chunk of them are). Most students will be weak in a couple of criteria and strong in others. Letting them maximise their advantages within the system may be much better than arguing over which single criterion works best. It’s worth a shot at least.