12 Comments
Dec 21, 2023 · Liked by Delip Rao

Hi Delip, there is so much here that I resonate with. The forces you are highlighting speak to a broader malady that will be our undoing if the tide does not turn: disconnection. Disconnection from ourselves, from each other, and from nature. It reminds me of an AI event I participated in, a number of years ago now, in the Valley. In the first group discussion, the discussion lead asked, "How do we accelerate our progress toward the singularity?" I about jumped out of my chair, shooting my hand up to interject with "I object to the premise of the question!" The techno-optimism on display back then was shocking to me; two weeks earlier I had been sitting with a single mom with ten kids on the south side of Chicago to better understand her day-to-day struggles. The folks in that AI gathering might as well have been living on another planet. I know we've seen nothing yet in terms of the weirding to come, and I pray we come to our senses and change course before it's too late to overcome the ecological disasters that are well underway.

author

Great to hear from you after all these years. I feel we are building the engines for the grandest exploitative systems ever and willingly handing over the keys to today's wealthy and powerful.

Dec 21, 2023 · Liked by Delip Rao

Good to be in touch here, Delip! I could not agree more. It's a profound time to be alive in human history. We're in for an interesting ride, no doubt. I'm glad your voice is here to help us all make sense of what is unfolding.


> You know when you have some ideas brewing in your head, but you cannot immediately put them into words because the proper terminologies don’t exist, and you are so deep in the rabbit hole that anything you say comes out garbled, and it is too much work to climb out of the rabbit hole to give a guided tour. So you don’t bother saying it out loud, except maybe scribble something cryptic in your notebook?

This is so familiar to me, but I've never seen someone describe it so well. My PhD ran basically from slightly before ELMo until now, so I feel like I understand the arc of development pretty well. Still, I end up at a loss for words trying to communicate my mental model to others. It comes up often with AI doomer types: bridging the gulf between our perspectives would require so much tacit knowledge that I can't imagine doing it successfully. Or maybe I'm the crazy one, and all my hard-to-verbalize thoughts are incoherent.

author

Thanks! Keep writing (even in private if you like), and the words will come at the right time.


This was very astute. Couldn't agree more when it comes to the negative externalities. You're a talented writer. I'd encourage you to think about these concepts in a slightly more sinister/harmful environment. For example, in the U.S. we already use opaque scoring methods in our criminal justice system that inform bail, convictions, and sentencing. While not getting a job because of a score is bad, being sent to prison for longer than you otherwise would be is worse, imo. If interested, look up recidivism scoring.

author

Thanks, Patrick. I agree there are many things more harmful than getting dinged on a job application. I picked that example because it's accessible to most. What are some good academic readings on recidivism scoring?


This is a pretty concerning read as an undergrad who's really quite interested in the AI research scene. Do you think AI work necessitates the metric-chasing, or is it possible to find a relevant position while maintaining your integrity?


This is an excellent observation. I can't help but think there's an arms race going on, like the one between the newt (which becomes more poisonous so the snake can't eat it without getting sick) and the snake (which gets better at digesting poison, at other costs). People who are good at political games suck at getting _useful_ things done (though they are, perhaps by definition, great at gaming metrics) and vice versa. In other words, there are people who do useful work and people who are good at selling others' work, and the two sets of talents never coincide. And that's why the AI career game has become so (paradoxically?) alarmingly stupid, despite the ostensible intelligence of the people in the field.

At some point, the sellers-of-work realized they had all the power, and that the only thing that would happen if they asserted it is that... well, that human culture would deteriorate irreversibly, which wouldn't hurt their own salaries, so why not? Important decisions are made by people who know less than the square root of fuck-all when it comes to evaluating talent, so the whole system runs on these bullshit metrics that everyone now has to put up with; the game is about the h-index itself even though very little real work is getting done.

The old system was about optimizing oneself to appeal to untalented but highly-positioned humans--nobodies trusted to pick somebodies--and getting the resources you need by appealing to their biases. The new one is about optimizing oneself for search engines, appealing to AI agents of a nature that still isn't fully understood, and chasing whatever metric (i10-index, h-index, etc.) is the careerist bukkake-of-the-month whenever one has the misfortune of needing a new job.

I feel bad for people who are in AI because they legitimately care about the field. There are a few tenured professors who are legit but, for the most part, the natives have been driven out by bullshitters and careerists who deliberately use bad practices (e.g., testing on training data, because if you have to publish a retraction, that's _two_ papers to your name) for short-term career gains... and it's a damn shame.

It is probably going to get worse before it gets better.


Should Google delete Google Scholar if it does more harm than good and they don't maintain it?

author

We have not shown that everything Scholar (Google or Semantic) does causes more harm than good. For example, I like that the content is indexed and the convenience it offers. It would be nice if it didn't stress the h-index and the like.


Probably. What I've learned over the years is that transparency becomes surveillance and the good people lose. Obfuscation sucks, and so does siloization, but they're the only things that protect incomes from the cost-cutting psychopaths who want nothing but more reasons to deprive people of jobs and opportunities... which is why people in companies do it and why society needs them to do it.

The idea that everyone should have access to, say, a precise count of how many times each person's papers have been cited... seemed harmless, but turned toxic quickly.

Until we have a UBI and are well on our way to socialism, humans can't have nice things like centralized knowledge repositories and transparency of historical information. Under the old systems, nobody knew anything and things kinda sorta worked. Under the new one, there's a lot of false knowledge (publication metrics) that really just disguises existing biases while ruining people's lives.
