Monday, July 27, 2009

The Rise of the Robots: Open Thread

The Robots are Coming! No, I'm not kidding. And it might not be entirely prudent to just assume they'll be as benevolent as the Jetsons' maid Rosie. Recent news events include the following charming/alarming developments in the field:
  • Robot Built To Model Wedding Dresses
  • Military robots eat human flesh (and the not-exactly-comforting response by the makers of those robots that their biomass-munching creations are strictly vegetarian)
  • Robots can autonomously navigate human space (including opening doors and plugging themselves into electrical outlets to recharge)
So we're left with a potential combination domestic robot that, when you're not home, will dress itself up in your finest clothing, lounge around for hours recharging itself on electricity, and eat your chocolates (if not your cat). As soon as they make one that might drink my scotch, the war is on!!

In fact, the New York Times reported recently that advances in robot technology have begun to accumulate so rapidly that humans are seriously considering what might be the best practices for controlling them:
A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.
Coincidentally, such speculation informs the work in our current exhibition by Shane Hope. (See this great review of the show by Nathan Shafer at Pop Tech titled "Collective Formatting of the Future.") In talking with Shane (who's amazingly optimistic about the good that advancing technologies can bring for humankind) about the rise of the robots, he pointed me toward renowned cognitive scientist Marvin Minsky who, when asked whether artificial intelligences will inherit the earth, answered, "Yes, but they will be our children." (I believe we've discussed that idea here before.) And in light of all the recent chatter about robots, it's a comforting thought, I suppose, until you contemplate the likelihood of "The Bad Seed" scenario anyway...but I digress.

There's a piece in Shane's exhibition that touches on all this titled "Yes, but [grey goo] will be our children."

Shane Hope, "Yes, but [grey goo] will be our children.", 2009, archival pigment print, 48" x 48".

Shane Hope, "Yes, but [grey goo] will be our children" (detail), 2009, archival pigment print, 48" x 48".

"Grey goo," as Shane explains, is a "hypothetical end-of-the-world scenario involving molecular nanotechnology in which out-of-control self-replicating robots consume all matter on Earth while building more of themselves (K. Eric Drexler coined)." With the flesh-eating robots already being developed, all we need is for that technology to intersect with self-replicating nano-robots and voilà! Brunch R Us.

Such scenarios are, of course, the kind of disastabatory fiction Shane's also interested in helping us stay calm about. The truth of the matter is that no one can entirely predict what might happen when artificial intelligence becomes self-aware. It might represent the biggest boon humankind has ever seen. Or it might look at us the way we do cavemen. Or it might look at us the way we do a nice juicy steak.

Consider this an open thread on what such technologies might mean for humankind and what role artists play in helping us make sense of it all.

Labels: future art, gallery artists exhibitions


Blogger Tom Hering said...

Isaac Asimov's Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by a human being, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Obeying these laws would require incredible artificial intelligence. A robot's thought processes would have to include empathy, and the ability to make moral judgments about what constitutes harm in an infinite number of circumstances.
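Read as a decision procedure, the laws form a strict priority ordering, with each law able to veto the ones below it. A toy sketch of that ordering (the action names and the three sets are entirely hypothetical; the real difficulty Tom describes — judging harm, orders, and self-preservation — is hand-waved here into simple set membership):

```python
# Toy sketch of Asimov's Three Laws as an ordered veto chain.
# The three sets stand in for predicates a real robot would need;
# deciding what belongs in them is exactly the part that requires
# humanlike moral judgment.

harmful_actions = {"tear arm off"}                      # would injure a human
ordered_actions = {"tear arm off", "fetch the scotch"}  # commanded by a human
self_destructive = {"walk into the fire"}               # would destroy the robot

def permitted(action):
    """Return True if `action` is allowed under the Three Laws."""
    if action in harmful_actions:     # First Law: absolute veto
        return False
    if action in ordered_actions:     # Second Law: obey, unless vetoed above
        return True
    if action in self_destructive:    # Third Law: self-preservation, lowest rank
        return False
    return True

print(permitted("tear arm off"))      # ordered by a human, but the First Law vetoes it
print(permitted("fetch the scotch"))  # ordered and harmless, so obey
```

The ordering itself is trivial to encode; it's the predicates, not the precedence, that demand the "incredible artificial intelligence."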

Yet something could always go wrong.

"I'm sorry, sir. I thought it most appropriate to quickly tear your arm from its socket when you stated, 'This itch is killing me.'"

7/27/2009 08:42:00 AM  
Anonymous Larry said...

Before I say anything else, could you define "disatabatory"?

7/27/2009 09:04:00 AM  
Blogger Edward_ said...

Before I say anything else, could everyone agree to forgive my utterly hopeless spelling?

"disastabatory" is a recently coined term we discussed once before referencing the penchant among some people to imagine the very worst outcome for any speculative future...more than that it implies they get off on such pessimism, despite no real reasons to assume the worst.

7/27/2009 09:12:00 AM  
Blogger Tom Hering said...

Technovelgy is a site that allows you to keep up with the ways technology is fulfilling the predictions of science fiction.

Scroll down to "ATMs Fight Back With Pepper Spray." If anything can go wrong it will go wrong, so lurking muggers won't be the only thing you'll have to worry about in the future.

Greatest near-future danger: autonomous robots using our streets, and talking on their cell phones at the same time.

"So, I was at Sally Android's party, and that Johnnie Walker droid comes stumbling up to me and says ... (KA-RASH!) ... Shit! Gotta call you back."

7/27/2009 09:22:00 AM  
Blogger mm said...

Scientists are already working on nano-robots to live in the human body for various medical reasons so it seems only a matter of time before there will be microscopic artificial intelligences in human brains.

This grey goo piece by Shane Hope is beautiful and disturbing.

7/27/2009 10:22:00 AM  
Anonymous Anonymous said...

There is nothing more terrifying than a self-aware machine.

Humans are made of ridiculously frail matter; we cannot outlive our genetic constraints. However, a robot has limitless opportunity to outlive its mode of existence. Once machines become self-aware, what is to stop them from asserting their sovereignty over humans?

If I were a self-aware robot, I could claim no god as my own. With man as my creator, my life would be nothing but an existential wasteland. In such a world I would likely have no choice but to rid the earth of the lesser carbon based life forms which already exist on borrowed time.

Long live the new flesh! ( a far more terrifying vision than videodrome)


7/27/2009 10:25:00 AM  
Blogger George said...

when artificial intelligence becomes self-aware...

If "self aware" is taken to mean "conscious" then it will NEVER HAPPEN.

7/27/2009 10:26:00 AM  
Blogger Edward_ said...

I think self-aware does imply conscious, George. Why do you think that will never happen?

7/27/2009 10:32:00 AM  
Blogger Edward_ said...

I have to admit, when watching the video of the robot plugging itself in, my first thought was "ahhh...that's so cute!" I suspect we'll have such feelings about most robots' achievements as they become more and more humanlike.

7/27/2009 10:38:00 AM  
Anonymous Cedric Casp said...

We already have lots of technologies that can work for us, but we haven't moved past the thinking that work is what gives most human beings value. More and more people lose jobs because machines (or applications) can do the work for them, and this is going to lead to major social crashes unless someone considers that designing the robot that does everything begs the redesigning of the socio-economic system. "Work for us" should also imply "bring us money and candy" (although I think in the future you will go to the candy directly, no money involved).

Thinking about new, efficient ways to make war is pointless. There are already a couple of red buttons in this world (perhaps some even private, not government-related) that, if some wacko decides to press them, will fuck us up really deep.
Unless they mean a robot war while you're in your bunker, barely surviving?

We need to work on systems that ease deprivations to a degree where making wars would seem absurd. That involves more sciences than AI and robotics, but robots would be great if they could build architecture from anything they find around them (not just junk like in Wall-E, but making bricks out of earth, sand, rocks and whatnot).

Cedric Casp

7/27/2009 10:49:00 AM  
Anonymous Anonymous said...

Owning a robot seems like owning a human slave without the guilt, though growing food for machines amidst human starvation might deter a few from owning them.

But...I'd like to order a soft little parrot bot. It would sit on my shoulder and help me with my interpersonal skills. It would explain things.


7/27/2009 10:51:00 AM  
Anonymous Cedric C said...

++Owning a robot seems like owning ++a human slave

That's why they need to remain machines. Too much AI implies the ethic that your machine shouldn't be a slave anymore.

Cedric C

7/27/2009 10:56:00 AM  
Blogger George said...

Why? First of all we do not even know what consciousness is.

I sit here aware of the fact that I'm typing this out, or at least I 'think' I'm typing this out, but I don't know how or why I'm aware of this, or even if it is a true state, maybe I'm dreaming, maybe you're wondering when the period will come. <- right there.

If we don't know what something is then it's pretty hard to define a set of algorithms that will make it happen. A program which passes the Turing test, just passes the Turing test, meaning we are unable to tell if we are speaking with a computer (program/algorithm) or a person. We have no way of knowing whether or not the program is conscious.

That's the simple answer.

The hard answer is that I believe that "consciousness" is quantum-like. At any moment it exists in all possible states and only becomes "conscious" when it's poked in the eye (measured, looked at, exercised as an act). In other words, it won't succumb to modeling, because any model is as good as any other and the programmer won't know until it's time to be conscious.

This means that the 'program' is never the same from instant to instant, and never produces the exact same results at two points in time with any degree of predictability.

I do think it's possible to make a very smart computer which can make decent decisions. I think it will prove to be much harder, maybe impossible, to make a computer which feels fear or love, and without them it won't be conscious.

7/27/2009 10:57:00 AM  
Blogger THX1138 said...

Machines have already effectively taken over our society. It doesn't matter if machines are sentient or not - humans are sentient, and you are making the assumption that moral imperatives held by humans are superior to those of machines. The assumption here is that machines lack emotional content and therefore will run the program to the end, like a slowly detonating atomic bomb. Where humans will stop short of total annihilation, the program is a force of nature.

What is left out of this doomsday scenario is that machines will have counter machines. For every drone, wetware, or symbiote or nanocloud, there will be counter measures.

It will be as if man has moved back into the jungle and its incredible ecosystem.

To some this will be a dystopian scenario - man vs. machine. To others it will spell the end to gender bias and usher in a new era of functionally omniscient leather pants wearing immortals who will seed the universe with their perfectly designed exoskeletons and finely tuned cognitive modules.

How will we monetize infinite access to information? Will humans be slaves to their technology? How can you tattoo your tattoo? What games do you play when you win at everything?

These are questions that have yet to be answered properly.

Marvin Minsky's "Society of Mind" is a collection of essays on various topics. I would summarize the debate on AI as digital vs. analog, or discrete vs. continuous, with the ultimate problem being "consciousness of context."

It will happen, George. But maybe as genetically engineered brains instead of electronic parts.

What is interesting is that even "digital" computers have analog parts, even the ones that are supposed to make ones and zeros rely on the very analog process of electrical flow, concentration, and dissipation. How like the ocean!

7/27/2009 11:01:00 AM  
Anonymous Anonymous said...

Cedric: Please explain 'Al' so I can understand you. But yes, they should remain machines though even if they do, once they cross the uncanny valley, we will start relating to them differently and I wonder if this will influence how we relate to each other.

7/27/2009 11:05:00 AM  
Anonymous Anonymous said...

Al=artificial intelligence - I'm slow. If I'd only had that parrot bot.


7/27/2009 11:12:00 AM  
Blogger Mery Lynn said...

Off topic:

What is your take on this reality show with artists? I was somewhat surprised that Jerry Saltz was one of the jurors.

7/27/2009 11:22:00 AM  
Blogger George said...

Military Robots and the Laws of War

7/27/2009 11:22:00 AM  
Anonymous Anonymous said...


I think that there could actually be a state of awareness free of emotion. Even in humans we see varying states of emotional capacity and most humans have a sense of will and a basic awareness of their own independent existence.

I think that awareness is merely the tip of the iceberg with respect to consciousness. For me, it's not a huge leap at all to consider that machines will develop a sort of awareness sooner than we think. In my mind this awareness will actually be achieved on a mass scale, something of a networked awareness among computers.

Technology essentially functions as a collective, so there wouldn't be any need for a computer to maintain a sense of individuality; it would be one with the whole of artificial intelligence. Awareness unbound by matter.

Supposing a computer were to achieve awareness, can you imagine the impact this would have upon our relationship with technology?

Incidentally, it is far more likely that humans will attempt to fuse technology with our own consciousness through implants and various devices which will erode our own humanity through some evolutionary paradigm shift.

Tomorrow promises more weird visions and we don't even know what's in store for us come 2012.


7/27/2009 11:23:00 AM  
Blogger Brandon Juhasz said...

this is a great song by the Futureheads called Robot:

I am a robot, living like a robot, talk like a robot, in the habititting way

Look up to the sky (robot), you can trample ove me (robot)
Do anything you do, now the ground has gone

I am a robot, living like a robot, talk like a robot, in the habititting way

In the future we all die (robot), machines will last forever (robot)
Metal things just turn to rust, when you're a robot

I am a robot, living like a robot, talk like a robot, in the habititting way

The best thing is our life span (i don't mind)
We last nigh on hundred years (i don't mind)
If that mean's we'll be together i don't mind
I have no mind, i have no mind

Im programmed to follow you (robot), do exactly as you do (robot)
Now my nervous system's blue , i feel fine

I am a robot, living like a robot, talk like a robot, in the habititting way

The best thing is our life span (i don't mind)
We last nigh on hundred years (i don't mind)
If that mean's we'll be together i dont mind (i have no mind)
The best things last a life time (i have no mind)
When you age i will not change (i have no mind)
I think i'll be around forever if you dont mind

I have no mind, why don't (i have no mind) (robot)
I have no mind, why don't (i have no mind) (robot)
I have no mind, why don't (i have no mind) (robot)
I have no mind, why don't (i have no mind)

7/27/2009 11:24:00 AM  
Anonymous Cedric Cas said...

I agree with lots of what you said, George, but unfortunately there are bio-technologies, or I don't know what they call it. Fusion of DNA with synthetic molecules - who knows what could be achieved.

Whatever is reality I would say we entirely live it as a dream. Awareness would imply the possibility of dreaming reality.

Cedric Casp

7/27/2009 11:27:00 AM  
Blogger zipthwung said...


or on you tube:


Cyberpunk bro! Then the dot com bubble burst and everyone went green.

7/27/2009 11:27:00 AM  
Blogger George said...

“in from three to eight years we’ll have a machine with the general intelligence of an average human being ... a machine that will be able to read Shakespeare and grease a car.” Marvin Minsky (MIT) in Life magazine, 1970.

7/27/2009 11:28:00 AM  
Blogger Tom Hering said...

Marshall McLuhan and others have pointed out how progress always involves both gain and loss. The introduction of something that's new and better doesn't just mean the replacement of something that's bad or lesser - it also means the crowding out of other things that were good.

News item: Teenagers in Los Angeles have grown up with extremely well-lighted streets, but have never once seen the stars above them. When they've been taken outside the city for the first time, at the age of 16 or 17, they've felt shocked by the night sky. (Arthur C. Clarke's "The City and the Stars" come true, though LA's not underground.)

Another example: Cars quickly get us to where we're going. But so many people go everywhere in a car - even to a store just two blocks away - that they've never experienced the pleasures and surprising encounters that only walking somewhere (or nowhere) can provide. (Ray Bradbury's "The Pedestrian" come true, though the police don't stop us yet just because going out for a walk is not a normal thing. Oh wait, yes they do.)

What good things will get crowded out of our experience of life as autonomous robots become commonplace?

7/27/2009 11:29:00 AM  
Blogger George said...

Logic, DNA, and Poetry

7/27/2009 12:05:00 PM  
Blogger Joanne Mattera said...

I experienced something of computer takeover a few years ago when I installed an updated anti-virus system (Norton). This "friend" soon became a foe.

A few days after installation, when I went to access some images of my artwork—images that I saw in thumbnail form on the screen—a message came up saying "Image unavailable." I tried another image; same message. Then another; same thing. I panicked when I saw that the first four or five images in each folder were visible in thumbnail but not in actual fact. I felt like the astronaut in 2001 at the mercy of HAL.

In fact, the heuristic algorithm (that's the HAL, a system that learns and develops as it goes along, becoming able to identify viruses that didn't exist when it was programmed) was seeing images of my paintings as viruses to be contained. Before too much damage was done, my computer guy removed Norton and installed McAfee in its place. I had backup CDs so though it took almost a week, I was able to restore my images. I now have a much better backup system.

P.S. Ed, I'd feel much more included in the race if you referred to us as "humankind."

7/27/2009 12:08:00 PM  
Blogger Edward_ said...

I'd feel much more included in the race if you referred to us as "humankind."

Horribly lazy writing habit of mine. My apologies. Post updated.

7/27/2009 12:15:00 PM  
Blogger Joanne Mattera said...

You are such a nice human. Thanks.

7/27/2009 01:02:00 PM  
Blogger Edward_ said...

maybe...or maybe I'm a nice artificial intelligence that only looks human...how could one tell?

...OK, so that's an easy gentle ;-)

7/27/2009 01:05:00 PM  
Blogger George said...

Further reading:
The Trouble with the Turing Test by Mark Halpern

"... But this defense fails, because we do not really judge our fellow humans as thinking beings based on how they answer our questions—we generally accept any human being on sight and without question as a thinking being, just as we distinguish a man from a woman on sight. A conversation may allow us to judge the quality or depth of another’s thought, but not whether he is a thinking being at all; his membership in the species Homo sapiens settles that question—or rather, prevents it from even arising. If such a person’s words were incoherent, we might judge him to be stupid, injured, drugged, or drunk. If his responses seemed like nothing more than reshufflings and echoes of the words we had addressed to him, or if they seemed to parry or evade our questions rather than address them, we might conclude that he was not acting in good faith, or that he was gravely brain-damaged and thus accidentally deprived of his birthright ability to think..."

7/27/2009 01:10:00 PM  
Blogger George said...

Further reading:
Why Minds Are Not Like Computers by Ari N. Schulman

"People who believe that the mind can be replicated on a computer tend to explain the mind in terms of a computer. When theorizing about the mind, especially to outsiders but also to one another, defenders of artificial intelligence (AI) often rely on computational concepts. They regularly describe the mind and brain as the “software and hardware” of thinking, the mind as a “pattern” and the brain as a “substrate,” senses as “inputs” and behaviors as “outputs,” neurons as “processing units” and synapses as “circuitry,” to give just a few common examples.

Those who employ this analogy tend to do so with casual presumption. They rarely justify it by reference to the actual workings of computers, and they misuse and abuse terms that have clear and established definitions in computer science—established not merely because they are well understood, but because they in fact are products of human engineering. An examination of what this usage means and whether it is correct reveals a great deal about the history and present state of artificial intelligence research. And it highlights the aspirations of some of the luminaries of AI—researchers, writers, and advocates for whom the metaphor of mind-as-machine is dogma rather than discipline."

7/27/2009 01:11:00 PM  
Blogger Edward_ said...


Could I ask that, in the context of the thread, you summarize opinions and cite sources rather than assigning us readings? It's difficult to keep the dialog flowing when we all need to run off and read a few books to get your point. Long quotes like those often require more context to fully understand their relevance and/or potential shortcomings. I'm more interested in your conclusions based on those readings.

My understanding is that the big stumbling point for most people in considering the feasibility of "the singularity" is the notion that until AIs can express emotions, we're essentially not going to see them "thinking" like humans. There is some research now, however, suggesting that even emotions are merely quantitative mental calculations and that, should one grow large enough in its computational power, a computer could indeed express emotions indistinguishable from those of humans, and in what would seem to us appropriate responses. Moreover, there are advances in affective computing (i.e., teaching computers to recognize emotions in humans), leading to a point where you could conceivably have as meaningful a conversation with a computer about being dumped, for example, as you might with any human.

7/27/2009 01:25:00 PM  
Blogger George said...

Another analogy: there is a mathematical problem, known as the n-body problem, which attempts to predict (calculate) the positions of n objects in orbit. It is possible to program this on a computer and plot the results, which I've tried. With two objects, given initial position, velocity and mass, it's possible to get a predictable plot. But for solutions of more than 2 bodies, forget about it: the result is chaotic and so dependent on initial conditions that it makes the prediction unusable.
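The sensitivity George describes is easy to demonstrate: run the same three-body integration twice, the second time with one body nudged by a millionth of a unit, and compare where that body ends up. (A crude Euler integrator with softened gravity and made-up initial conditions, purely for illustration; a serious simulation would use a symplectic method.)

```python
import math

def step(bodies, dt=0.01, G=1.0, soft=0.01):
    """Advance planar point masses one Euler step. Each body is [x, y, vx, vy, m]."""
    accels = []
    for i, (xi, yi, _, _, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (xj, yj, _, _, mj) in enumerate(bodies):
            if i != j:
                dx, dy = xj - xi, yj - yi
                r3 = (dx * dx + dy * dy + soft) ** 1.5  # softened to dodge collisions
                ax += G * mj * dx / r3
                ay += G * mj * dy / r3
        accels.append((ax, ay))
    for b, (ax, ay) in zip(bodies, accels):
        b[2] += ax * dt
        b[3] += ay * dt
        b[0] += b[2] * dt
        b[1] += b[3] * dt

def final_position(nudge, steps=2000):
    # Three equal masses in an arbitrary planar configuration.
    bodies = [[-1.0 + nudge, 0.0, 0.0, -0.5, 1.0],
              [1.0, 0.0, 0.0, 0.5, 1.0],
              [0.0, 1.0, 0.5, 0.0, 1.0]]
    for _ in range(steps):
        step(bodies)
    return bodies[0][0], bodies[0][1]

xa, ya = final_position(0.0)
xb, yb = final_position(1e-6)  # a one-millionth-of-a-unit nudge
drift = math.hypot(xb - xa, yb - ya)
print(drift)  # how far apart the two runs end up
```

The runs are each perfectly deterministic; it's only the dependence on initial conditions that makes prediction hopeless in practice.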

Predicting the weather has similar difficulties.

Yet in both cases there are "real results": the weather changes, and the sun, earth and moon move in their orbits in a somewhat expected way, even though we cannot precisely say where all three exactly are.

I think consciousness is somewhat like this, we can be aware that it exists but we cannot say precisely how it exists algorithmically.

Could there be a program which was conscious in a way we don't know about? Would a 'robot' programmed to recognize itself, recognize itself in a disguise?

7/27/2009 01:49:00 PM  
Anonymous Dalen said...

The notion of progress as always being good (much like growth and expansion always being good) is an illusion. Does anyone really prefer the taste of food cooked in a microwave? But hey, it's fast, and every home's gotta have one, right?

Humans seem to be preoccupied with creating solutions to problems that do not exist, and in the process messing things up and making new problems. Why do we do that? Because we can. And sometimes because there's money to be made.

7/27/2009 02:13:00 PM  
Blogger Tom Hering said...

Dalen, I believe it was Erich Fromm who said - speaking of technology - "Just because we CAN do something doesn't mean that we SHOULD."

"Now I am become Death, the destroyer of worlds." So remarked Robert Oppenheimer (co-creator of the atomic bomb) who also said, "It is perfectly obvious that the whole world is going to hell. The only possible chance that it might not is that we do not attempt to prevent it from doing so." In other words, we have a talent for screwing things up, so we ought to proceed very cautiously with schemes to make ourselves safer or better or more advanced.

7/27/2009 02:40:00 PM  
Anonymous Dalen said...

As far as what role artists play in helping us make sense of it, I think it's valuable to translate ramifications for the future into visuals (as Shane Hope does).

What I'm wondering is, if robots become self-aware, will they start making their own art?

7/27/2009 03:04:00 PM  
Blogger zipthwung said...

Anybody who digests sci-fi knows all this jive talk (neuromancer, snow crash, enders game, dream park, the coming technological singularity, to name a few).

What will visual art function as - a sanitizing influence? A hammer? A monkey wrench?

I'm not so into Mark Pauline, Tinguely, Giger or Arnaldo Pomodoro, Burning Man, or even, say, Picasso's modern primitive monkey/car hybrid bronze.

So many spectacular sand castles, so little forward thinking analytical vision.

I think genetic engineering will produce the results AI evangelists are looking for - machines born in vitro with a conscience and "superhuman" intelligence. Or didn't anyone watch that program?

Enjoy your algorithm.

7/27/2009 03:12:00 PM  
Anonymous Dalen said...

Tom, I totally agree.

zipthwung, do you mean a sanity-inducing influence or a sterilizing one? ;)

7/27/2009 03:47:00 PM  
Blogger George said...

Ed, sorry. I put up the links with the intention that those who are interested might want to read more. Yes, the articles were long. What I thought was more or less in my first couple of remarks.

7/27/2009 03:58:00 PM  
Blogger zipthwung said...

According to Foucault, the institution perpetuates systems of control. The internet is one such institution, I believe. Video games and internet porn keep people off the streets.

7/27/2009 04:28:00 PM  
Blogger David Cauchi said...

I for one welcome our new robot overlords.

Unfortunately, however, I can't see them sticking round here and cleaning up our messes for us. Not when there's a big old universe out there to explore and seed with life.

Then again, maybe that's how this whole sorry tale got kicked off on this ball of rock.

Oh, and George is far too complacent. How many times have our finest thinkers declared something impossible right before it happened?

The most likely way for AI to develop is through an evolutionary process, rather than being designed. The algorithms will model themselves.

7/27/2009 05:37:00 PM  
Blogger George said...

David, I don't think I'm being complacent; I think I'm being realistic about the enormous difficulties involved. I'm also not saying they shouldn't try; regardless of the degree of their success, the research is bearing other fruits.

There are some things which we know to be true: fusion works on the sun, and the weather is predictable a week in advance but not two years in advance, yet both are hard to duplicate or model. I realize that anything that exists, human consciousness for instance, is possible and therefore in theory duplicatable. But it may not be practically duplicatable.

7/27/2009 07:11:00 PM  
Anonymous Cedric Casp said...

This topic of consciousness is complex and involves many hypotheses trying to fill up what they call the "explanatory gap".

What is interesting about life is that you could describe the phenomenon as striving toward perception and intelligence. Well, one would need to debate whether the first living cells were "sensorial" beings (reacting to the sun and space around them). But it seems as if the slow growth toward better perceptual and intelligent species reflects some bizarrely unaware motivation or unintelligible reasoning (instinct?) behind life. As if life had a purpose: wanting to "perceive" and evolving toward that goal, in the meantime trying to duplicate and protect its precarious system (making babies, eating vs. defense mechanisms).
The notion of sentiment, I find, is important in that goal of both perceiving and protecting. Emotion is as great a step toward the conscious as intelligence. This is how I would separate awareness and consciousness. I think the conscious involves a sense of morality, one informed by an "emotional intelligence" which registers empirically what it believes is wrong or right according to what it remembers from experience.

If you have an AI that is aware but unable to develop the most minimal sense of morality, then it is not conscious. For example, my cats have limited consciousness because they know when they do wrong. They run and hide. I don't think this is merely mechanical. Instinct seems to precede intelligence and is a mixture of biological desire, emotion, and... morality (!). Emotions are probably built on bodily pains and releases: feeling bad and feeling good start physically. The baby generally doesn't cry out of emotion, but crying evolves into emotions. The problem with an emotional and "moral" consciousness is that it builds up its own system of belief rapidly, and the "moral" or "emotion" takes over a more rational awareness of existence (hence religious groups).

Anyways, I don't have the expertise so I shouldn't comment (lol). But I'd be interested to attend a symposium on this topic and hear the latest news. There is a strong esoteric push in me that wants to believe in a universal field of consciousness as it's been promoted by spiritual gurus for millennia, but I'm aware that my conscious is probably fooling itself (again) from a lack of observable information.

Cedric Casp

7/28/2009 02:17:00 AM  
Blogger Tom Hering said...

What about skin and touch? Isn't consciousness largely body consciousness? Consciousness of the body and its sensory flows?

What about the interaction of mother and infant? How can consciousness develop apart from the tactile interaction of flesh and flesh? What about physical exploration (crawling, walking, acting upon objects)? How can consciousness develop apart from the tactile interaction of flesh and world?

In other words, how can you have a human mind, or anything remotely resembling a human mind, without a human body? It's not as if mind and body exist separately.

7/28/2009 08:07:00 AM  
Blogger Edward_ said...

Funny you should ask, Tom:

7/28/2009 08:17:00 AM  
Blogger Tom Hering said...

That's very interesting. Though if something were seriously wrong with my child, I'd kick the Kiosk and head down the hallway - banging on doors until I got a human nurse or doctor to help me.

I can't imagine feeling good about the technology - not even if I went in with a minor problem involving me alone. But then, I'm not an actor in a promotional video. ;-)

I remember my recovery period last year. I went to see my surgeon for a follow up. I informed her of a new development and asked, "Is that unusual?" "Yes," she replied. "I guess that makes me a medical freak," I proffered. "No more than before," she answered with a grin.

Medical Kiosk: $750,000.00

Surgery: $26,000.00

My surgeon's sense of humor: Priceless.

7/28/2009 08:51:00 AM  
Blogger George said...

Does something have to be living to have consciousness?

What defines alive? Is a virus alive?

Certainly a bacterium is alive and it can/does react to its environment. This means it has, or performs, some sort of sensory response to outside stimuli. This means it's capable of sensing, but does this fact make it conscious? This would also answer Tom's questions about touch, etc.

A bacterium will move away from a negative stimulus, but I doubt that it is conscious in the way we mean. So, in the higher organisms, at what species or developmental stage do we assign "consciousness" as opposed to instinctive reaction?

Can an organism be self aware and dumb?

7/28/2009 11:37:00 AM  
Blogger Tom Hering said...

George, there is consciousness as we mean it when there is love. Can AIs love? Only in Kubrick/Spielberg movies. The depiction of robots who love is a metaphor that says WE are more than machines.

7/28/2009 12:07:00 PM  
Blogger George said...

Like Cedric said, the topic of consciousness is complex, and you have to be self-aware to discuss it -- which puts us all in a special category not currently occupied by computers.

But oh my god (small g), what if my computer is secretly and quietly conscious of itself -- then every time I reboot (that old shoe metaphor slips in again) does my computer experience the equivalent of "Groundhog Day"?

Love is consciousness; self-awareness is required in order to separate awareness between "me" and "you." There's a long way to go here, but somewhere along the way there will first be "Robot Sex - It knows how you like it!"

7/28/2009 01:24:00 PM  

Post a Comment

Subscribe to Post Comments [Atom]

<< Home