christian @curious_vii
driven by compression progress · delphi.ai/christian · Philadelphia, PA · Joined October 2009
Tweets 4K · Followers 2K · Following 949 · Likes 7K
Very difficult to BS your way through a hard-hitting live oral exam vs. forms, traditional RFP responses, etc. Great way to vet expertise and save everyone time. Voice assistants are going to be huge for this, especially in a liquid global labor market.
This may continue to be true, even after GPT-5. Why? 3x output vs. typing, it's less cognitively taxing (you'll have more energy every day), and the foundation models are fundamentally discursive: you actually want as much salient raw information as possible, and then…
Compression begets expansion. You've got way more responsibility now, whether you want it or not. ("I can't" won't cut it because you literally can.) So, what will you choose to be responsible for?
The fact that SOTA foundation models in 2024 were trained on such a relatively paltry volume of what has been (let alone what could be) digitized and still work as well as they do means, IMO, we're still quite a ways from the top of the s-curve (for learning / compression alone).
Retvrn to "just pick up the phone." Ways to go in re admin UX, but there's going to be a point at which it's easier to set up and deploy a bespoke audio assistant (that can handle both inbound and outbound) than schedule an email, and THAT is going to be a big deal.
You need to be Loom-maxxing. You need to become good friends with Claude. You need to take cybersecurity seriously. You need to stop wasting time on calls and delegate context gathering to a voice assistant. You need to rigorously prepare for meetings by stuffing salient digital…
Asymmetric upside via engagements with significant at-risk compensation, forcing buyers to get clear on their longer-term goals (and order their desires) and sellers to get extremely good at handling exceptions with a charitable (vs. self-interested) orientation.
I suspect there's something to be gained by analyzing the ML community's obsession with benchmarks through the lens of mimetic theory. I don't think there's actually that much useful information in those standard evals vs. what you can gain by rapidly experimenting to solve real…
It's time.
AI as a means to reduce context switching costs is a really big deal. Means you can handle way more across an enormous problem space and still be productive.
Love really is the answer.
Also true on the buy side. Strive to purchase reduced uncertainty around a desired result vs. a reliable output per se.
This is fully compatible with patience and kindness, but incompatible with complacency.