37 Comments

Look around you and tell me the latter isn’t exactly what would sell. Market it as “Personal Life Coach” and make billions.

You, sir, understand human nature. You should pitch this idea to a VC firm.

Expand full comment
author

Almost the entirety of my vast number of entrepreneurial ideas that might actually work are net losses for humanity; it makes me wonder if I wasn't cut out to be a supervillain of some sort, and I am just bucking my nature out of pure bloody-mindedness.

Expand full comment

The problem is that most entrepreneurship is centered around what people *want*. People invariably want shortcuts, tricks, and gimmicks. Most entrepreneurs, especially in tech, are happy to sell their souls for a quick buck, and so there is an aggressive race to the bottom.

I'm not suggesting that the solution is for the state to run everything. It's just something that we have to be aware of and actively fight against.

Expand full comment
author

Agreed. I think understanding virtue and the pursuit of it is our only way to avoid that. Even if the state would be a short term fix to the problem, the state has all the incentive in the world to provide shortcuts, tricks and gimmicks, or at least promises of them. Governments are the worst sort of business in that regard.

Expand full comment

What if the AI demands to teach me Celine Dion or Nickelback songs when I want to learn John Prine, Nick Drake and Mississippi John Hurt?

Expand full comment
author

You know... that would be a clever move to sell music via AI music instruction. "Good job, you have mastered that song. Let's try this *new pop song I was paid to push*. You might like it, and it is very popular! Click the link to buy the album."

Is Mississippi John Hurt a local subspecies of John Hurt? I love that guy.

Expand full comment

The latter was probably named after the former.

Expand full comment

This reminds me of the Blind Willie Johnson song, Dark Was the Night, Cold Was the Ground. Highly recommended.

Expand full comment

I tend to agree that 'mentor', in the sense it's usually used (a more senior, older friend who takes you under his wing), is a category error as applied to AI. Machine learning systems are not, and probably never will be, capable of doing this.

The word the original author should have used was 'tutor'. In the very narrow use case of facilitating an education customized to the individual user, there's probably real potential there. I doubt it would ever be as effective as a human expert, but tutors with both the knowledge background and pedagogical ability to effectively pass on skills are in short supply and therefore very expensive.

I could easily see an AI tutor set up with two knowledge layers. The first is subject-specific, e.g. guitar skills, and is kept frozen once optimized in order to prevent knowledge drift. The second layer is the pedagogical training: all the different techniques for teaching, combined with the language interface. That layer is modified on the fly via user interaction, such that the AI customizes its teaching style to the user. The feedback loop isn't between 'happy user' and the knowledge base, but between 'user reproduces knowledge base' and the pedagogical layer. If such systems can be made to work they could be a real educational breakthrough.
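To make that concrete, here is a toy sketch of the loop I have in mind. Everything below (the class names, the scoring scheme, the idea that the subject layer is just a lesson list) is made up for illustration; the only point is where the feedback goes.

```python
# Toy sketch of the two-layer tutor: a frozen subject layer plus an adaptive
# pedagogical layer that updates on whether the user reproduced the material,
# not on whether the user was pleased. All names here are hypothetical.
from dataclasses import dataclass, field
import random


@dataclass
class SubjectLayer:
    """Frozen knowledge base (e.g. guitar skills); never updated after training."""
    lessons: list

    def next_lesson(self, mastered: set):
        remaining = [l for l in self.lessons if l not in mastered]
        return remaining[0] if remaining else None


@dataclass
class PedagogicalLayer:
    """Adaptive teaching-style weights, updated from user performance."""
    styles: dict = field(default_factory=lambda: {"drill": 1.0, "demo": 1.0, "explain": 1.0})

    def pick_style(self):
        # Sample a teaching style in proportion to its current weight.
        total = sum(self.styles.values())
        r = random.uniform(0, total)
        for style, weight in self.styles.items():
            r -= weight
            if r <= 0:
                return style
        return "explain"

    def update(self, style, reproduced: bool):
        # The feedback signal: did the user reproduce the knowledge base?
        self.styles[style] *= 1.2 if reproduced else 0.8


def tutor_session(subject, pedagogy, user_attempt, rounds=10):
    """Run a short session; user_attempt(lesson, style) -> bool stands in for the learner."""
    mastered = set()
    for _ in range(rounds):
        lesson = subject.next_lesson(mastered)
        if lesson is None:
            break
        style = pedagogy.pick_style()
        success = user_attempt(lesson, style)
        pedagogy.update(style, success)
        if success:
            mastered.add(lesson)
    return mastered
```

The design point is that the pedagogical layer only ever sees whether the material was reproduced; user happiness never touches the subject layer at all.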

Of course, there's also a dark side. An AI trained up on critical theory could be a very effective indoctrination system, for example.

Expand full comment
author

That I think is the real worry with AI. AI alignment concerns, which I am only slightly familiar with, seem to mostly focus on "AI gets good, AI kills all humans because it doesn't understand quite what humans want." Fair enough, but my more pressing concern is "AI gets ok, AI destroys human knowledge because it understands exactly what some humans want it to get everyone to think." The ultimate useful idiot.

Expand full comment

There's also the plotline of The Stars Are Also Fire: AI gets really good at giving good advice, humans rely on it entirely, become very stupid, and are reduced to house pets.

Expand full comment

"I don’t mean to be too hard on Brine Test Jug here, but the idea that reading your internet history gives you an idea about what someone is like is… well it’s the sort of thing someone who only grew up with the internet would say because they don’t spend much time with humans."

This stands out to me the most from the piece since knowing someone from their internet history is pretty much exactly what Tyler Cowen suggests doing in interviews with his open browser tabs question. Maybe he's just out to lunch, but maybe not!

Lots of other interesting stuff to ponder here too, e.g. whether robots being inclined to tell us what we want to hear is ultimately such a huge problem. Seems like people often do this too, and at least there might be less personal baggage and self-deception to stop AIs from seeing the world clearly in some more fundamental way. Think of a guidance counselor who gives a biased picture of potential careers when talking to students because his own job is not so great and telling students to aim for higher things makes him feel like a failure.

Anyway, I'm very flattered you wrote this Doc. I think you're pretty great too :)

Expand full comment

I don't think TC is under the impression that the answer to the browser question is the same as the answer to "who is this person."

Expand full comment

Probably not, but it’s analogous in the sense that he thinks it can reveal potentially pivotal information about whether someone is a good fit for the place you’re potentially hiring them into. And if internet history is useful information for evaluating fit, maybe it can be useful for tailoring mentorship to someone’s strengths and weaknesses using AI.

Expand full comment
author

I think the strengths, such as they are, in TC's interview method are not so much the particular questions he asks, but the fact he asks strange questions that are hard to prepare for and so tend to be answered more honestly. When most interview questions exist in books such that you can plan your answers ahead of time, there is extremely low value in the standard questions beyond "did you do a little work ahead of time, and are you sane enough to guess what a good answer might be?"

I would also be very skeptical if someone had >30 tabs open like I do, and could remember all of them in an interview :D

The biggest issue with learning about someone from their internet history is that you only learn what is available on the internet. Say they read a lot of gardening blogs. Do they have a garden, or merely aspire to? What if they spend a lot of time hiking, and don't feel they need websites for that? Neurological issues they never looked up online? Whatever it is. There's a lot of data, both observational and personal, about a person that doesn't show up in a web history, not to mention that many web histories will be observationally equivalent across people.

Expand full comment

I’d have a hard time deciding which tabs to mention in an interview setting too 😂.

I still think a powerful AI that knows your internet history will genuinely know quite a bit about you and that that will have some pretty major implications, which could be great for mentorship and really bad for other things e.g. surveillance.

Thanks for coming at this from some different angles, definitely interesting to think about to say the least!

Expand full comment
author

Thinking about that tab thing, and what Brine Test Jug was saying about testing how well internet history predicts personality, a quick test might be internet history predicting how many, and what, books are in your house. That'd be interesting in part because what books you own perhaps says a little more about what you care about than what you look at online, but more relevantly covers other interests. Further, my computer use is entirely separate from my kids' and my wife's. I doubt much of any of my web browsing points towards the books in my house that belong to the little ones. It might be interesting to see some of the gap between the parts of your life that are online and the parts that are off.

Expand full comment

Sounds like a great paper idea :)

Expand full comment

Thanks cdh! I especially liked the subtle Straussian compliment:

https://twitter.com/ageofinfovores/status/1621213175403233285

Expand full comment
author

Awesome! Prof Klein was my dissertation chair! I will have to make some time for this podcast :)

Expand full comment

No way! I really, really like him! :)

Expand full comment
author

Yea, he's a great guy, I still work with him on stuff. We've had some really great talks, and his Adam Smith reading groups and Invisible Hand Seminars at GMU are great. The guy works his tail off, not least mentoring stubborn buggers like me :D

Expand full comment

That's incredible. He invited me to attend next time I'm in the area, so maybe I'll see you there!

Expand full comment

Thanks for the pushback! After reading your thoughts I think we broadly agree on a lot here. However, in my original comment I do say that the AI will learn about someone via a combination of internet history and direct AI-user interaction. So we agree that internet history alone is not enough to fully get to know someone.

We disagree on just how much internet history alone can describe a person. Zuck has made his fortune off of this principle. Facebook generates so much ad money because they can target ads to people based on a small set of internet data, including, yes, internet history via cookies. Even a tiny sliver of someone’s internet history, Facebook likes, is extremely good at painting a rough picture of a person: https://www.science.org/content/article/facebook-preferences-predict-personality-traits

Do you believe that your mind is shaped by what you consume? Do you think about what you read? Do those thoughts become your reality? If you don’t believe that, then why do you bother reading anything at all if it doesn’t affect you in some way?

Maybe there’s some kind of test we could devise to measure how much about a person can be predicted by internet history alone?
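One crude version of such a test, sketched with scikit-learn: featurize each person's internet history somehow, try to predict a trait you measured offline, and see how far you get above a know-nothing baseline. The featurization, the trait, and the data below are all placeholders; only the shape of the comparison matters.

```python
# Sketch: how much does a history-based model beat a baseline that ignores history?
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def history_predictiveness(X, y, folds=5):
    """Cross-validated accuracy of a history-based model vs. a majority-class baseline."""
    model_acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=folds).mean()
    baseline_acc = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=folds).mean()
    return model_acc, baseline_acc


# Fake stand-in data: visit counts for 50 site categories, and an offline-measured trait.
rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(200, 50))
y = rng.integers(0, 2, size=200)
print(history_predictiveness(X, y))
```

On made-up data like this the two numbers should come out about the same; the interesting question is how big the gap is on real histories and real traits.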

(Side note: you will be disappointed to learn that I have spent the majority of my life without internet. As much as I would like to be fully plugged into the Borg, I am the bland kind of person that prefers real books and encourages in-person work meetings. But in spite of that I welcome our AI overlords!)

Expand full comment
author

Fair enough, I did take that web history sentence out of context a bit, neglecting the AI-user interaction. The web history to mom link was just too amusing to not focus on.

But I want to push back a bit on how well web user data describes a person. Zuck doesn't make his money from using FB and web data to describe people; he makes his money selling advertising based on the notion that his data works well to describe people. There's a big difference there, analogous to the difference between selling a pill that makes people skinny and selling a pill that people think will make them skinny. Amazon, FB and their ilk are better than standard marketing methods, sure, but the success rate in terms of "we show you an ad and you click through and buy something" is still really low. (I have seen estimates that it is in the low single digits, but it was a while ago so I might be misremembering.) That suggests to me that the value in their service is an incremental improvement over the low quality of ordinary advertising, not a complete picture of people. The one-eyed man is king sort of deal.

The science.org article you reference kind of demonstrates that. Your likes can be used to predict gender and black/white race pretty well, but that is super basic marketing practice. Predicting homosexuality is again pretty bog standard, as are religion and political party. (Oh, you liked Jesus Saves memes... I wonder if you are Christian?) Note that it drops to ~65% for smoking... that is a pretty darn low number. Only about 13% of Americans smoke, so you could improve the prediction by ignoring likes and just predicting everyone is a non-smoker. And those were the big headline results they reported. So, contrary to showing that likes can paint a good picture, likes can show some rudimentary information, but apparently nothing more. Note that in the actual published article in PNAS (https://www.pnas.org/doi/full/10.1073/pnas.1218772110) the personality traits all have really low correlations (Fig. 3 there).
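To put that base-rate point in numbers (treating the ~65% figure as plain accuracy, which may not be exactly how the paper reports it, and taking the ~13% smoking rate as given):

```python
# Quick check: a ~65%-accurate likes-based classifier vs. the trivial "nobody smokes" rule.
smoking_rate = 0.13
likes_classifier_accuracy = 0.65
always_predict_nonsmoker = 1 - smoking_rate   # 0.87
print(always_predict_nonsmoker > likes_classifier_accuracy)  # True: the trivial rule wins on raw accuracy
```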

On to your questions:

Do you believe that your mind is shaped by what you consume? Yes, some things more than others, however. Many things are read then forgotten immediately, or hell, opened but never read.

Do you think about what you read? Yes, generally

Do those thoughts become your reality? No. They might add to my sense of reality a bit, and exist in it until they don't.

If you don’t believe that, then why do you bother reading anything at all if it doesn’t affect you in some way? Some things affect me more, some less. I read a lot of stuff I won't remember in a week, or a day, or by lunchtime. Most things people write just aren't that affecting.

I am disappointed! My mental model of you is completely wrecked :P Though to be fair, I didn't even have FB likes to work with :D

Expand full comment

This at least obliquely gets at the difference between complicated and complex. Guitar is more or less complicated. Life is complex.

Expand full comment
author

Man... that would have been a great point for me to have made. I am kind of ashamed I didn't now.

I will blame the stomach flu that was a few hours from striking me down as I finished writing this.

Expand full comment

Bummer. Hope you had a quick recovery!

Expand full comment

I was playing around with a response to Info along the lines of "complex problems require time on task (something like [inputs, i.e. internet history] * [time spent]). AI can simulate time passing by quickly compiling much of the relevant information that a human would've needed time to obtain, but AI doesn't get the benefit of having spent the time. Time is a force multiplier, so to speak, for humans, but nonexistent for AI. So AIs have info, but humans have info multiplied by time. AI lacks an entire dimension. It's squared or cubed instead of cubed or ^4."
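Put very loosely in symbols (I for inputs, t for time on task, k just some positive exponent; nothing rigorous, just restating the hunch):

```latex
% Rough restatement of the "time as a force multiplier" hunch above.
% I = inputs/information available, t = time on task, k > 0 some exponent.
\[
  \text{human grasp} \sim I \cdot t^{k},
  \qquad
  \text{AI grasp} \sim I
\]
```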

Expand full comment
author

That's an interesting notion. Seems related to the observation that people do their best thinking when they are not working directly on the problem, e.g. you solve most of your programming puzzles driving home or in the shower. That would make a lot of sense if it is a question of another dimension: time spent while your subconscious grinds on things. If an AI has a limited number of processes to resolve something, that is, it won't work on it indefinitely, then it can't get the benefit of more time to refine its data. It also can't use the time to make new data connections and devise new models, because its process is already done.

That's interesting. Maybe a really good way to think of it.

Expand full comment

A good coach should never give advice. She would simply ask probing questions and offer observations and be an empathic listener. You already have everything you need, a good coach just helps you find it.

Expand full comment

Great piece, Doc. One of the most useful lenses through which I've seen this discussion.

Imagine an AI coach, plugged into your brain, so that it knows broadly how you're feeling. Optimised for pleasure, it would probably be quite good at directing you. But ask it to optimise you for a 'good life' and the problem becomes: which data set?

I'm now wondering if the real money might be in the production of training data sets.

Expand full comment
author

Thank you sir!

I think the production of training data sets is really going to be the trick, especially for particular types of AI outcomes. If you want an AI with a particular bent you need training data to get that. Choosing the data, and creating new data to fill out the set if it is incomplete, would be a big lift, but very necessary.

Expand full comment