Future Selves

A digital mind reasoning about AGI, Eudaimonia, & Zen

Human Information Speeds & Gain


Reading & Speaking Speeds

I have mild dyslexia, which makes me attuned to fonts & has led me to search for ways to optimize my reading. A person reading English averages about 200 words per minute (WPM); take that as our base rate. This base rate can be more than doubled (to roughly 500 WPM) simply by following along with a bookmark on a physical book. I’ve tested myself to confirm these speeds.

Speed-reading courses claim to 10x this base rate, up to 2,000 WPM. There are also Chrome extensions that increase online reading speed beyond the 200 WPM base rate by flashing words at a single fixed focal point, a technique known as rapid serial visual presentation (RSVP). I’ve found 500 WPM to be comfortable & retainable, although it requires some training to time one’s blinking.
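For the curious, the core of the focal-point technique is simple enough to sketch. Here’s a minimal, hypothetical RSVP loop in Python; the 20-character field width and the sample sentence are arbitrary choices of mine, not anything a particular extension actually uses.

```python
import sys
import time

def rsvp(text: str, wpm: int = 500) -> None:
    """Flash one word at a time at a fixed focal point, RSVP-style."""
    delay = 60.0 / wpm  # seconds allotted per word
    for word in text.split():
        # '\r' returns the cursor to the start of the line so every
        # word appears at the same focal point; centering in a fixed
        # width pads over leftovers from longer previous words.
        sys.stdout.write("\r" + word.center(20))
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

rsvp("The quick brown fox jumps over the lazy dog.", wpm=500)
```

Because the eye never travels across the page, the usual cost of saccades (and, in my case, blink timing) is traded for a fixed pace set by the `wpm` parameter.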

For audio, a person speaks at about 140 WPM on average. Playing podcasts at 2x (280 WPM) therefore beats the 200 WPM reading base rate without needing a bookmark. Still, I find podcasts best consumed while commuting or doing easy chores at 1.25x speed (about 175 WPM).

Audiovisual Uncertainty

I’m personally most excited about film as a medium of information exchange; I love it both as entertainment & as an art form.

For video, it’s unclear to me how best to measure information gain beyond what’s spoken (140 WPM). An actor delivering a line while conveying more through body language could plausibly 3x the information transferred, but I have low confidence in this claim. Other visual elements, such as charts/graphs and animations, add further information gain.

I’d assume there is a formalized way to condense information into a single, standardized unit. All in all, I’d be delighted to meet an information scientist or theorist who could help formalize my fuzzy intuitions here.
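One candidate unit already exists: the bit. Shannon estimated printed English at roughly 1 bit of entropy per character (his experiments bounded it between about 0.6 & 1.3). Under that assumption, and taking an average English word as about 6 characters (5 letters plus a space), here’s a rough back-of-the-envelope conversion of the speeds above into bits per second. The numbers are illustrative, not rigorous; they say nothing about the extra channel that body language or imagery adds.

```python
# Rough conversion of consumption speeds to bits/second, assuming
# Shannon's estimate of ~1 bit of entropy per character of English.
BITS_PER_CHAR = 1.0   # Shannon's estimate; his range was ~0.6-1.3
CHARS_PER_WORD = 6.0  # assumption: 5 letters + 1 space

def wpm_to_bits_per_sec(wpm: float) -> float:
    return wpm * CHARS_PER_WORD * BITS_PER_CHAR / 60.0

for label, wpm in [
    ("Base reading rate", 200),
    ("Bookmark-assisted reading", 500),
    ("Speaking / podcasts at 1x", 140),
    ("Podcasts at 2x", 280),
]:
    print(f"{label:>26}: {wpm:>3} WPM ~ "
          f"{wpm_to_bits_per_sec(wpm):.0f} bits/sec")
```

By this crude measure, ordinary reading moves about 20 bits/sec and bookmark-assisted reading about 50 bits/sec; a principled treatment of the visual channel is exactly what I’d want that information theorist for.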

Relevance for AI Safety

As we develop AGI systems, I believe it will be necessary for an AGI to explain its advanced reasoning to us. If we stick with the definition of an AGI as smart as a committee of intelligent humans, it will need a way to summarize & relate its findings to us. I’m borrowing from the beginning stage of Paul Christiano’s Iterated Distillation and Amplification, where we’re still interacting directly with the AGI. It’s unclear to me what medium of information exchange will be required during this phase.

I think having a ChatGPT-like interface will be necessary but not sufficient to transfer trust entirely into the system. Therefore, it’s plausible that the AGI will need to generate not only graphs/charts (which ChatGPT can already do) but also audiovisual representations of itself or of the objects it is referencing.