Jun 9, 12:20 PM
Chomsky doesn’t believe that brain-machine interfaces will work.
Musk is worried that we’ll be killed by our AIs.
Hawking is afraid that little green men will come here and get us.
Just because you’re an expert in one area, it doesn’t mean you know a goddamn thing about everything else.
two of those guys are experts in the area of their concerns
+Nina Tryggvason Uhhhh, no. None of those guys are experts in the areas of their concerns. That’s what concerns me.
+Brian Salter how expert does one have to be regarding aliens or people not managing science (which anyone who’s read science fiction understands)?
+Nina Tryggvason Well, these guys don’t even write sci-fi, so they have no expertise even with imagined scenarios.
Once out of their fields, they’re pretty much just blow-hards. (Except for Musk, who is always a blow-hard, and has no real expertise in anything.)
+Brian Salter which is more subjective wording, and your inability to recognize other people’s expertise reveals your own lack thereof.
+Nina Tryggvason Hmmm, in a phrase: Bugger off, bitch.
Dear Brian: don’t dish it, when you can’t take it, eh
On April 20, Facebook announced that it was working on building a device that could turn your thoughts directly into texts without you speaking or typing.
A few days after that, Elon Musk announced a similar project that would let you turn your thoughts into text and connect the brain directly to the Internet.
While the challenges Facebook and Musk have described are mostly technological, centered on building better brain sensors, both must also overcome our limited understanding of the mind itself.
In order to read your thoughts, a machine would need to understand how you think, and exactly how language works.
To try to answer those questions, Inverse spoke with Noam Chomsky, professor emeritus at MIT and the father of modern linguistics, about the connection between language and thought, and the rising technology of brain-computer interfaces.
Chomsky says that he thinks the tech is nearly impossible to create, and that even if it could be built, it would ultimately be dangerous.

Do you think it’s possible for a machine to turn your thoughts into words?
It would first of all have to have a way of determining where our thoughts are. Nobody knows how to do that. You and I, for example, when we introspect into what we’re now doing, producing complex sentences, we can’t access the thinking that’s lying behind it. There’s no known technology that can do it, either.
What you can do, maybe, is find ways to determine whether an organization of motor activity in the brain which can be detected could, say, move a lever or something. That’s conceivable. Anything like finding out what our thoughts are is just beyond science fiction.

You’ve talked before about how we don’t necessarily think in words. Is that what makes this so difficult?
The kind of thinking that we have introspective access to is typically in words, but I should say, there’s nothing much understood about this.
To take a classic example, when Alan Turing wrote his famous article asking whether machines can think, he started off by saying that the question of whether machines can think is too meaningless to deserve discussion. What he said is, “The problem isn’t the notion of thinking. It’s just so vague that you can’t ask a serious question about it.” He said when he’s asked, “what is thinking,” about the only thing he can say is, “That kind of buzzing that goes on in my head.” That’s how much we understand about it.
A lot’s been learned, and a little is known about the neurophysiology of it, but it’s a very difficult topic. It’s one of the hardest questions in the sciences.
Why is it so hard?
There’s all kinds of stuff going on in the brain, but we don’t know how to tap it. Part of the reason is ethical. We know a lot about the neurophysiology of vision, for example, but the reason is that we’ve done experiments on cats and monkeys, who have about the same visual system we do. Whether rightly or wrongly, scientists do invasive experiments with other animals. You put an electrode into the brain and you can find out what a particular neuron is doing. Out of that, you can construct a lot of understanding of the way the brain is analyzing visual signals and so on.
You can’t do that in the case of language, because for one thing, there is no other animal that has anything like the same system, and we don’t permit studies of that kind with humans.
The technology that’s used to study the brain [in humans] is noninvasive, so it’s picking up electrical signals or studying what amounts to blood flow. That’s what fMRI does. That gives you some information about the locations in the brain where things are happening, but it doesn’t tell you very much about the nature of the actual processing that’s going on.
What if instead of trying to turn your thoughts into some representation of them, this actually just connected your mind to someone else’s mind?
We have something like that. It’s called language. Language enables you to express your thoughts in a form which I can understand. That’s an astonishing property unique to humans, and trying to figure out where it came from is a deep problem of science.
To take a concrete example, when we speak, the words have to come out in sequence. One word follows the last. There’s good reason to think that that linear ordering is not part of language. It’s just imposed by the specific sensory motor system, the articulatory system, that just requires that everything come out in a sequence. What’s internal to the mind and is yielding our thoughts probably doesn’t have sequence, but you can’t introspect into that.
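Chomsky’s point about linear order can be caricatured in a few lines of code: the structure linguists posit internally is hierarchical (a tree), and the one-word-after-another sequence only appears when that structure is flattened for speech. This is purely my own illustrative sketch; the tree notation and example sentence are assumptions, not anything from the interview.

```python
# Illustrative sketch (not Chomsky's formalism): a hierarchical constituent
# structure, and the linear word sequence imposed when it is "spoken".
# A constituent is (label, [children]); a leaf is just a word string.
tree = ("S",
        [("NP", ["John"]),
         ("VP", [("V", ["is"]),
                 ("NP", [("Det", ["a"]), ("N", ["man"])])])])

def linearize(node):
    """Flatten the hierarchical structure into the word sequence
    that the articulatory system requires."""
    if isinstance(node, str):      # a leaf: an individual word
        return [node]
    _label, children = node        # an internal constituent
    words = []
    for child in children:
        words.extend(linearize(child))
    return words

print(" ".join(linearize(tree)))   # -> John is a man
```

The point of the toy is that nothing in `tree` itself is ordered in time; sequence only arises in `linearize`, the analogue of the sensory-motor system Chomsky describes.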
Are there specific challenges to developing the technology you would need for that?
We don’t even have the kind of technology that will enable us to understand the most elementary computations of language, the simple things, like why one word follows another.
Take the sentence, “John is a man,” and compare it with, “John are a man.”
The first one is grammatical. The second one isn’t grammatical, but we barely have the beginning of technology that can even make distinctions like that. The technology doesn’t really know what’s happening.
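The grammaticality contrast Chomsky cites can be mimicked, very crudely, with a hard-coded agreement table. This is my own toy sketch, not a real NLP system, and it only underlines his point: the check works by lookup, with no understanding of what is happening in the sentence.

```python
# Toy subject-verb agreement check (illustrative assumption, not a real
# grammar): each known subject is paired with the verb form it requires.
AGREEMENT = {
    "john": "is",    # singular subject takes "is"
    "they": "are",   # plural subject takes "are"
}

def agrees(sentence):
    """Return True if the first two words satisfy the toy agreement rule."""
    subject, verb = sentence.lower().split()[:2]
    return AGREEMENT.get(subject) == verb

print(agrees("John is a man"))   # True
print(agrees("John are a man"))  # False
```

A table like this distinguishes the two sentences without representing anything about their meaning or structure, which is roughly the gap Chomsky is pointing at.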
The study of what’s going on in the brain is very hard, even how a memory is preserved. How do you remember what you saw yesterday or five minutes ago? Even that is barely understood.
Thinking about how much we actually understand both language and the brain, what do you think is a realistic next step?
A realistic question, and one that is being studied, is: what properties of the human brain enable us to have the capacity to produce new expressions which express our thoughts and are intelligible to others, and to do so over an infinite range?
Just see what parts of the brain are even involved in this. That’s a hard problem. If that problem is solved (and there are some ideas about it) then the next problem, which is much harder, is to determine how the brain is doing it. That’s way beyond anything we understand, even for much simpler things than language.
If it is eventually possible to read our thoughts, do we actually want to do this?
I think we should not want it, just as I do not want Facebook and Google and the NSA to have access to my activities. They do it, but not because I want them to. If they could have access to our thoughts, which is, as I say, beyond science fiction, that would be even worse. They shouldn’t even have the access they do have.
If it were real, I would think there should be efforts to end it, just as I think there should be efforts right now to prevent private corporations or the national government from obtaining detailed information about my actions and preferences. It’s none of their business.