• 104 Posts
  • 84 Comments
Joined 1 year ago
Cake day: December 11th, 2024

  • I don’t think it’ll be LLMs (which is what a lot of people jump to when you mention “AI”); their latencies are far above the microsecond range. It will be AI of some sort, but it probably won’t be considered AI, due to the AI effect:

    The AI effect is the discounting of the behavior of an artificial intelligence program as not “real” intelligence.

    The author Pamela McCorduck writes: “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’.”

    Researcher Rodney Brooks stated: “Every time we figure out a piece of it, it stops being magical; we say, ‘Oh, that’s just a computation.’”

    LLMs might be useful for researchers diving down a particular research/experiment rabbit hole.



  • The Bitter Lesson talks about speech recognition instead of synthesis, but I would guess that it’s a similar dynamic:

    In speech recognition, there was an early competition, sponsored by DARPA, in the 1970s. Entrants included a host of special methods that took advantage of human knowledge—knowledge of words, of phonemes, of the human vocal tract, etc. On the other side were newer methods that were more statistical in nature and did much more computation, based on hidden Markov models (HMMs). Again, the statistical methods won out over the human-knowledge-based methods. This led to a major change in all of natural language processing, gradually over decades, where statistics and computation came to dominate the field. The recent rise of deep learning in speech recognition is the most recent step in this consistent direction. Deep learning methods rely even less on human knowledge, and use even more computation, together with learning on huge training sets, to produce dramatically better speech recognition systems. As in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked—they tried to put that knowledge in their systems—but it proved ultimately counterproductive, and a colossal waste of researcher’s time, when, through Moore’s law, massive computation became available and a means was found to put it to good use.

    Also posted over in !discuss@discuss.online here, since I was reminded of the essay




  • IMO free will is commonly misunderstood. It’s not an absolute property, it’s a relative statement. In other words, something doesn’t “have” free will; the term is merely shorthand for “behavior that the observer can’t predict”. To me, a rock doesn’t have free will because I can use relatively simple physics to predict its behavior perfectly. Other humans have much more free will because it’s much harder to predict their behavior. A bug is somewhere in the middle. To a superhuman intelligence (supercomputer, aliens, deity, take your pick), humans don’t have free will, because it can predict our behavior perfectly.

    That squares with my opinion on QM in that even if deterministic interpretations of QM are eventually rigorously ruled out, I would still be of the opinion that if we could poke through the underlying substrate and query an intelligence there, our behavior would be perfectly predictable. Much like a video game character discovering the math behind the RNG that controls their universe. So they’re kind of orthogonal concepts, but somewhat related.
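    The video-game analogy can be made concrete. A PRNG looks unpredictable from inside the game, but an observer who has discovered the algorithm and seed predicts every “random” event perfectly. A minimal Python sketch (the seed value 42 is just an arbitrary illustration):

    ```python
    import random

    # The universe's hidden RNG, and an observer who has poked through
    # the substrate and learned both the algorithm and the seed.
    world_rng = random.Random(42)
    observer_rng = random.Random(42)

    # "Random" events in the world, and the observer's predictions of them.
    events = [world_rng.randint(1, 6) for _ in range(5)]
    predictions = [observer_rng.randint(1, 6) for _ in range(5)]

    print(events == predictions)  # True: unpredictable from inside, fully determined from outside
    ```

    From inside the game the rolls satisfy the shorthand definition of free will; from the observer’s vantage point they don’t, which is the relativity the comment above describes.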




  • Sounds like a bubble, which isn’t a bad thing, but it’s not very common in the US IME. I haven’t looked at housing prices in that area, but I’m guessing they’d be obscenely expensive for most people. Even where real estate is much cheaper, I’ve only really known a few people who have done that sort of thing, and they’re all well-off.

    I’m also going to plug the community we’ve got over at !AskUSA@discuss.online, it’s great for casual US-focused questions like this.