The Blue Brain

Science


Illinois

Joined
20 Mar 07
Moves
6804
30 Oct 08

This is probably the most fascinating article I've read this year:

http://www.seedmagazine.com/news/2008/03/out_of_the_blue.php

__________

Here are a few excerpts:

"In fact, the model is so successful that its biggest restrictions are now technological. "We have already shown that the model can scale up," Markram says. "What is holding us back now are the computers." The numbers speak for themselves. Markram estimates that in order to accurately simulate the trillion synapses in the human brain, you'd need to be able to process about 500 petabytes of data (peta being a million billion, or 10 to the fifteenth power). That's about 200 times more information than is stored on all of Google's servers. (Given current technology, a machine capable of such power would be the size of several football fields.) Energy consumption is another huge problem. The human brain requires about 25 watts of electricity to operate. Markram estimates that simulating the brain on a supercomputer with existing microchips would generate an annual electrical bill of about $3 billion . But if computing speeds continue to develop at their current exponential pace, and energy efficiency improves, Markram believes that he'll be able to model a complete human brain on a single machine in ten years or less."

...

"There is nothing inherently mysterious about the mind or anything it makes," Markram says. "Consciousness is just a massive amount of information being exchanged by trillions of brain cells. If you can precisely model that information, then I don't know why you wouldn't be able to generate a conscious mind." At moments like this, Markram takes on the deflating air of a magician exposing his own magic tricks. He seems to relish the idea of "debunking consciousness," showing that it's no more metaphysical than any other property of the mind. Consciousness is a binary code; the self is a loop of electricity. A ghost will emerge from the machine once the machine is built right."

...

"Niels Bohr once declared that the opposite of a profound truth is also a profound truth. This is the charmed predicament of the Blue Brain project. If the simulation is successful, if it can turn a stack of silicon microchips into a sentient being, then the epic problem of consciousness will have been solved. The soul will be stripped of its secrets; the mind will lose its mystery. However, if the project fails—if the software never generates a sense of self, or manages to solve the paradox of experience—then neuroscience may be forced to confront its stark limitations. Knowing everything about the brain will not be enough. The supercomputer will still be a mere machine. Nothing will have emerged from all of the information. We will remain what can't be known."

_________


Please share any thoughts or impressions...

AH

Joined
26 May 08
Moves
2120
30 Oct 08
1 edit

Originally posted by epiphinehas
This is probably the most fascinating article I've read this year:

http://www.seedmagazine.com/news/2008/03/out_of_the_blue.php

__________

Here are a few excerpts:

"In fact, the model is so successful that its biggest restrictions are now technological. "We have already shown that the model can scale up," Markram says. "What is holding us bac be known."

_________


Please share any thoughts or impressions...
…However, if the project fails—if the software never generates a sense of self, or manages to solve the paradox of experience—then neuroscience may be forced to confront its stark limitations. Knowing everything about the brain will not be enough. The supercomputer will still be a mere machine…

That conclusion wouldn't follow from that premise, so I find it odd that they should say or think that. If the project fails, how would they know whether it failed simply because the computers failed to accurately simulate ALL the physical processes in the brain? After all, we don't yet know for sure what all those processes are in sufficient detail.

P
Upward Spiral

Halfway

Joined
02 Aug 04
Moves
8702
30 Oct 08

Very interesting. Thanks, epiphinehas! 🙂

Cape Town

Joined
14 Apr 05
Moves
52945
30 Oct 08

Originally posted by epiphinehas
"Niels Bohr once declared that the opposite of a profound truth is also a profound truth. This is the charmed predicament of the Blue Brain project. If the simulation is successful, if it can turn a stack of silicon microchips into a sentient being, then the epic problem of consciousness will have been solved. The soul will be stripped of its secrets; th ...[text shortened]... . Nothing will have emerged from all of the information. We will remain what can't be known."
A terribly poor conclusion.
1. I disagree that the soul has secrets or the mind mystery. We already know roughly how it all works and we learn more all the time, just as we do with everything else in the universe. There is nothing particularly magical about the brain.
2. The failure of the project will only show the failure of the project - it will not prove anything about the brain except that it doesn't function exactly the way the experimenters think it does.
3. The statement "We will remain what can't be known." is simply ridiculous. Failure to find out how something works does not in any way mean we cannot find out how it works. If we took that attitude to everything we would never discover anything.

There are a number of problems with the experiment, the main one being that our brains develop over time with experience etc., and simulating that whole process will probably not be possible in the next ten years. I think it will take much longer than that to figure it all out. Even if we can simulate neurons, we might have to simulate sight and sound inputs and let the system grow and learn over time - though I guess even a week-old baby is clearly conscious, and that would be enough to tell us whether the system is achieving anything.

K

Germany

Joined
27 Oct 08
Moves
3118
30 Oct 08

Something nonexistent/irrelevant like the soul can hardly have mysteries.

F

Joined
11 Nov 05
Moves
43938
30 Oct 08

Originally posted by epiphinehas
" ... if the software never generates a sense of self ..."
How do you test if a bunch of silicon chips has an awareness, a sense of self? Do you just ask it?

If you ask a two-year-old child, "Do you have a sense of self?", can you rely on its answer?

P
Upward Spiral

Halfway

Joined
02 Aug 04
Moves
8702
30 Oct 08

Originally posted by FabianFnas
How do you test if a bunch of silicon chips has an awareness, a sense of self? Do you just ask it?

If you ask a two-year-old child, "Do you have a sense of self?", can you rely on its answer?
Think about it for a minute.

P
Bananarama

False berry

Joined
14 Feb 04
Moves
28719
30 Oct 08

I just finished reading "I Am A Strange Loop" by Douglas Hofstadter, the same author who wrote "Gödel, Escher, Bach" (which I haven't read yet). His new book is more explicitly a discussion of what consciousness is and how it's generated, and it's fascinating.

Hofstadter's idea is that consciousness is the result of a "strange loop": not a simple circuit, but one in which, as you progress from one level of complexity to another, continuously climbing "upward", you end up right where you started! It has to do with the self-referential properties of any sufficiently complex and powerful representation system. It's similar to what this project will demonstrate, although I think the wording used by the scientist - that consciousness is just the sum of the information being passed between neurons - misses the key part of consciousness: perception coupled with a powerful self-referential categorization system gives rise to the strange sensation of "I"ness.

Even though it's still just the neurons firing and such, it's not the pieces but the structure that matters.
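If it helps, the self-reference part can be shown in a toy Python sketch (my own illustration, not Hofstadter's formulation): a system whose model of the world contains the system itself.

class System:
    def __init__(self):
        # the system's model of the world includes the system itself
        self.world_model = {"self": self}

s = System()
# however far "down" you follow the model, you arrive back at s
print(s.world_model["self"].world_model["self"] is s)   # prints True

The pieces are ordinary; the loop is in the structure.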

Can't wait to hear if this ever gets built!!

R
Standard memberRemoved

Joined
10 Dec 06
Moves
8528
31 Oct 08

Originally posted by epiphinehas
This is probably the most fascinating article I've read this year:

http://www.seedmagazine.com/news/2008/03/out_of_the_blue.php

__________

Here are a few excerpts:

"In fact, the model is so successful that its biggest restrictions are now technological. "We have already shown that the model can scale up," Markram says. "What is holding us bac ...[text shortened]... be known."

_________


Please share any thoughts or impressions...
Also,

I have heard that the power of computing will not continue to exponentially increase, because there is a limit on how small silicon chips can get, and we are at the threshold right now.

R
Standard memberRemoved

Joined
10 Dec 06
Moves
8528
01 Nov 08

Originally posted by PBE6
I just finished reading "I Am A Strange Loop" by Douglas Hofstadter, the same author who wrote "Godel, Escher, Bach" (which I haven't read yet). His new book is more explicitly a discussion of what consciousness is and how it's generated, and it's fascinating.

Hofstadter's idea is that consciousness is the result of a "strange loop", not a simple circuit bu ...[text shortened]... ut the structure that matters.

Can't wait to hear if this ever gets built!!
And what is even stranger is when one "I"ness becomes aware of its own "I"ness.

F

Joined
11 Nov 05
Moves
43938
01 Nov 08

Originally posted by Palynka
Think about it for a minute.
Now you have thought for even more than a day, and the question remains:
How do you test if a bunch of silicon chips has an awareness, a sense of self? Do you just ask it?

5 REM ANSWERS "YES" WHENEVER IT IS ASKED WHETHER IT HAS AWARENESS
10 INPUT X$
20 IF X$ = "DO YOU HAVE AWARENESS?" THEN PRINT "YES"
30 END

Everyone knows that the above program will tell you that it has awareness. But can you trust it?

How will you know for sure that a heap of silicon and a bunch of program lines has awareness?

J

Joined
21 Nov 07
Moves
4689
19 Nov 08
1 edit

This is a very interesting topic. Like Schmo pointed out, we're close to the limits of how small and powerful our current computer technology can be. But other forms of computerised systems are being experimented with, like organic computers. The brain is organic, so it makes sense that organic technology would some day be able to fully reach the capacity of the human brain. In fact, I'm convinced that it eventually will.

One question I have is: how does the complexity of the "machine" affect the reliability of its operation? Obviously, the more involved something gets, the higher the risk of errors. And sometimes I suspect that errors may even be the very result of self-awareness.

Take the human brain, for instance. Even in all its marvel, it's full of flaws. For one thing, our memories (though our brain has a huge capacity for storing them) are often (if not always) factually inaccurate. There are scientists who suggest that our memory is not really a memory in the computer sense of the word, but a means to store only the information relevant to us so that we can better prepare for future events. For example, if you experience something uncomfortable, you will want to avoid it in the future. It makes sense that you would remember that experience, even if you don't fully understand exactly what it was about it that made you uncomfortable. What's more, the next time you experience discomfort in a similar situation, your mind will attempt to "connect the dots" and find the similarities between the two. In doing so, and for every time you experience similar situations, you will gradually lose your memories as separate, clearly defined events, and instead they will blur into one distorted memory for the purpose of helping you detect and prevent future experiences of discomfort of a similar nature. This works in favourable situations too, of course. We will remember things that made us feel good, keeping only the information that will help us realise how we can affect our surroundings so that we can experience them again.

These inaccuracies, which come from storing only parts of the information in order to handle future situations, can also cause us to come to the wrong conclusions. Whatever truths we derive from our insight can only be as true as the accuracy of the memories and knowledge that make up our very insight into a given subject.

I wonder if this whole "manipulation of data" to achieve goals that are beneficial to the organism of which the brain is a part is not in fact the self-awareness that we seem to think is such a wonderful and magical thing. It is without question useful to us. Imagine that we lacked this ability to recognise, filter out and analyse similar, often-occurring events around us. We would be inert: unable to adapt to the world around us, and unable even to attempt to change it to suit our own needs. (Which brings me to another interesting topic, far more interesting than mere self-awareness, and that's group mentality - a sort of extended self-awareness. But maybe that's a topic for another thread.)


Now, my original question, "How does the complexity of a machine affect the reliability of its operation?", becomes: "How does self-awareness affect the reliability of the information it operates on?" Obviously, if self-awareness is a means for self-preservation, then if a computer were to gain self-awareness, it would make sense for it to manipulate data in much the same way as the human brain does. Maybe it would reach a higher level of self-awareness that we can't even begin to comprehend (in which case it's more or less useless to us), or it may stay on a very basic level (like a toad or something, which can certainly adapt to its surroundings, but isn't capable of the kind of group awareness that we humans are capable of - and which I personally believe is crucial in acquiring the skills necessary to develop useful tools and start adapting the surrounding environment to our own needs). In any case, I'm not so sure that we want a computer to be truly self-aware; we want it to be able to perform complex calculations quickly, efficiently and accurately, to complement our own skill for deeper analysis and intuitive understanding. So, while I think it would definitely be desirable for the computer to be able to use and analyse information in a way that would indicate some form of intelligence (like an expert system), I really wouldn't want it to start manipulating data "unconsciously" and preparing for its own survival and (science forbid) perpetuation.

Having written all that, and shamelessly leaving it there for your discomfort (it took me almost an hour to finish, so if I suffered writing it, you'll damn well suffer reading it), as a programmer I realise that there's a simple solution to the problem (assuming I'm correct about the nature of self-awareness, which I most likely am not). The AI could in theory be given an infinite amount of memory (for all intents and purposes) and thus wouldn't have to store only relevant information. It could store detailed data gathered from its sensory nodes, but in a separate storage maintain the bits and pieces required by the self-awareness module of the programming. It would use the detailed storage whenever it realises that it has drawn the wrong conclusions, and use the separate storage when it needs to come up with quick answers based on experience.
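To make that two-store idea concrete, here's a rough Python sketch (the Agent class, the blending rule and all the names are my own made-up illustration, not anything from the article):

class Agent:
    def __init__(self):
        self.raw_log = []      # detailed record of every sensory event
        self.experience = {}   # compressed, lossy lessons keyed by situation

    def perceive(self, situation, outcome):
        self.raw_log.append((situation, outcome))   # keep the full detail
        # blur similar events into one compressed impression
        prior = self.experience.get(situation, 0.0)
        self.experience[situation] = 0.8 * prior + 0.2 * outcome

    def quick_answer(self, situation):
        # fast, experience-based judgement (possibly distorted)
        return self.experience.get(situation, 0.0)

    def revisit(self, situation):
        # fall back to the detailed log when a conclusion seems wrong
        return [o for s, o in self.raw_log if s == situation]

agent = Agent()
agent.perceive("hot stove", -1.0)
agent.perceive("hot stove", -0.5)
print(agent.quick_answer("hot stove"))   # the blurred impression
print(agent.revisit("hot stove"))        # the exact history: [-1.0, -0.5]

The quick answers are fast but distorted, exactly like the memories described above; the raw log is there for when the distortion leads you astray.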

s
Fast and Curious

slatington, pa, usa

Joined
28 Dec 04
Moves
53227
20 Nov 08

Originally posted by joe shmo
Also,

I have heard that the power of computing will not continue to exponentially increase, because there is a limit on how small silicon chips can get, and we are at the threshold right now.
The power of the semiconductor world is amazing, breaking size barriers that were supposedly sacrosanct. When we reach that limit for real, maybe in twenty years, there will be other technologies, like carbon nanotubes, that can take things even smaller. So it may come about that the circuit elements simulating a full human brain will have individual elements smaller than the neurons we are attempting to clone in an electronic environment.

w

Joined
02 Jan 06
Moves
12857
21 Nov 08

Originally posted by FabianFnas
How do you test if a bunch of silicon chips has an awareness, a sense of self? Do you just ask it?

If you ask a two-year-old child, "Do you have a sense of self?", can you rely on its answer?
If it asks you, "How about a nice game of chess", in a calm and reassuring creepy voice, run for the hills!!

F

Joined
11 Nov 05
Moves
43938
21 Nov 08
1 edit

Originally posted by whodey
If it asks you, "How about a nice game of chess", in a calm and reassuring creepy voice, run for the hills!!
I once heard, from the living room, a calm and reassuring creepy voice saying: "How about a nice game of chess".

It was from the TV. But it didn't have any self-awareness; it just said so.