
Parasha: I, Robot: November 23rd reading

What, me forget about the Parasha reading until reading desdenova's post? Madness! *shifty eyes*


*ahem* Anyhow, I enjoyed this one more than most of the others. It was a Three Laws story, but it didn't really follow the same formula as the others, which was helpful. Rather than being presented with a problem under set laboratory conditions and watching Our Heroic Researchers figure a way out of it using the Laws, we're given a human story that just happens to involve robots and the Laws, and we get to figure out how they work into it.

I think that probably doesn't make much sense typed out, but I know what I mean!

Anyhow, there's still a bunch of name-dropping of uber-scientific concepts that aren't explained at all, which is a bit jarring to my literary sensibilities after getting used to modern sci-fi explaining the hell out of everything. On the other hand, reading about a political campaign where "issues" about the actual candidate are raised and artificially played up to try to steer the campaign in one direction or another, rather than focusing on platforms... that's totally familiar.

Some other points:

1) It's interesting that really, we're talking about cyborgs here, in a way, rather than actual robots. Or maybe not... since cyborgs are usually more like human minds in a machine body, not a machine brain and structure encased in a human shell. So... I dunno? But at any rate it was a non-issue at the time, since robots as a concept were still so new and being sounded out and developed, and I guess the further evolution into sub-concepts like cyborgs or androids/replicants or whatever was still way off.

2) The whole concept of growing a human shell using human ova and so on is just... creepy. Eek!

3) Not having a sciencey background myself, I am not even going to attempt to comment on X-ray reflection or what have you. I have no idea how it would work, or how plausible it is.

4) Did anyone read this story *not* figuring that Stephen was a robot, and just trying to figure out how he was going to pull off some hoodwinking? Although I did find myself wondering, when "John" was introduced, whether the issue would turn out to be that he was gay, knowing how badly that would have gone over even in comparison to today. But I didn't find it very likely, especially since I couldn't imagine how it would tie in very well with him not eating in public, his sleeping habits, etc.

5) Yargh, MORE seesawing on the First Law. So now a robot *would* be able to kill a human being if it would help larger numbers of them. I... I... argh. Just... blah.

Hrm, it seems like there was something else I meant to mention, but since I'm feeling rushed now (due to having forgotten at first), I guess if I think of it I'll have to add it in the comments.


Link to the whole schedule. One more story to go!

Comments

(Deleted comment)
stormfeather
Nov. 23rd, 2009 09:27 pm (UTC)
Considering that this is the only, well, Asimov work I've read (that I can remember)....
(Deleted comment)
stormfeather
Nov. 23rd, 2009 09:37 pm (UTC)
Honestly? I've never been nearly as much into the sci-fi end of the spectrum as the fantasy stuff.
scifantasy
Nov. 23rd, 2009 10:29 pm (UTC)
So now a robot *would* be able to kill a human being if it would help larger numbers of them. I... I... argh. Just... blah.

montoya is right--in later days (and, hell, later Robot stories you'll run into soon) the concept of "humanity is greater than a human" gets introduced.

And is not comforting, to me, at least. I think this was intentional on Asimov's part.
scifantasy
Nov. 23rd, 2009 10:29 pm (UTC)
Er. Not introduced. Codified as a "Zeroth Law."
stormfeather
Nov. 24th, 2009 12:19 am (UTC)
Damnit, three laws = three laws. Don't tack on these Zeroth Laws. Grrr!

*stomp stomp*
(Deleted comment)
stormfeather
Nov. 24th, 2009 03:26 am (UTC)
I'm not saying I disbelieve y'all, just that I find the retconning/constant moving of the goalposts at the moment annoying...
scifantasy
Nov. 24th, 2009 06:53 pm (UTC)
Well, that's just it--it doesn't get introduced as "moving the goalposts." Robots do it on their own. You can see some of it coming, even.

Take a robot who has a choice between saving two people or saving one. It can only do one or the other.

It'll either break down, or prioritize and save the two people. The engineers probably want it to save two people, so they give it that flexibility.

Now abstract that. Save a hundred thousand people--a society--or one person? Flexibility means it'll do the former.

Now, turn "save" into "do good for."

And finally, replace "society" with "humanity." Suddenly, a robot can think "if I do something to this person, I can improve humanity."

That's the Zeroth Law.
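
(If it helps to see that escalation spelled out, here's a quick toy sketch in Python of the kind of prioritization I'm describing. The names, numbers, and scoring are purely made up for illustration; nothing here comes from Asimov or the actual stories.)

# Toy illustration: how "save the greater number" scales up into something
# like a Zeroth Law once "save" becomes "benefit" and the group in question
# grows from two people to all of humanity.
from dataclasses import dataclass

@dataclass
class Option:
    description: str
    humans_benefited: int   # how many people this action helps
    humans_harmed: int      # how many people this action hurts

def strict_first_law(options):
    """A rigid First Law reading: any harm to any human rules an option out."""
    harmless = [o for o in options if o.humans_harmed == 0]
    if not harmless:
        return None  # no harmless choice available: the robot simply freezes up
    return max(harmless, key=lambda o: o.humans_benefited)

def flexible_zeroth_law(options):
    """The 'flexible' robot: weigh net benefit to humanity as a whole,
    even if one person gets hurt along the way."""
    return max(options, key=lambda o: o.humans_benefited - o.humans_harmed)

dilemma = [
    Option("do nothing", 0, 0),
    Option("act against one person to protect a whole society", 100_000, 1),
]
print(strict_first_law(dilemma).description)     # -> "do nothing"
print(flexible_zeroth_law(dilemma).description)  # -> acts against the one person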
(Deleted comment)
scifantasy
Nov. 23rd, 2009 10:49 pm (UTC)
I'm not sure if Asimov intended it to be comforting or not.

Hm. I can see the argument--he was apparently surprised that people found the enclosed societies in The Caves of Steel claustrophobic and depressing, because he found them fantastic. (He was an agoraphobe, though; enclosed cities were literally his paradise.)

But then, and this is beyond the scope of I, Robot, Caves also has the discussion of the Spacers versus Earthers and who is better equipped to explore the stars...
(Deleted comment)
scifantasy
Nov. 23rd, 2009 10:56 pm (UTC)
Oh, I'm not disagreeing--and it's been ages for me too. But there's something to what you're saying, especially in the later and more off-a-cliff Foundation novels.
(Deleted comment)
scifantasy
Nov. 24th, 2009 01:49 am (UTC)
I actually haven't read either--I know, I know, gaps in my education--but I'll put them on the list for my Copious Free Time.
(Deleted comment)
scifantasy
Nov. 24th, 2009 06:49 pm (UTC)
But here's something--does Asimov (as distinct from Calvin) think this is how people should behave, or are the superior ethical (so to speak; under the view I'm proposing here, they're not ethics at all) strictures required because otherwise robots--who are physically superior, can survive more extreme temperatures, more punishing conditions, &c.--would supplant humanity entirely? Are the Three Laws the ethics of the future, or the mechanism for continued human dominance over its creations?
(Deleted comment)
khedron
Nov. 24th, 2009 02:23 am (UTC)
[edited to be less annoying]

4) Did anyone read this story *not* figuring that Stephen was a robot, and just trying to figure out how he was going to pull off some hoodwinking? Although I did find myself wondering, when "John" was introduced, whether the issue would turn out to be that he was gay, knowing how badly that would have gone over even in comparison to today. But I didn't find it very likely, especially since I couldn't imagine how it would tie in very well with him not eating in public, his sleeping habits, etc.

Huh! I can see how you'd get that, especially with the obvious emotional closeness between the two. However, after our previous discussions, I'm having a hard time not being very conscious of when these were written, to the extent that when he mentioned "ultrawave", I looked up the date on the story and then double-checked when TV was invented. (TV comes first by a long shot.) Anyway, it was less than a decade after this story came out that Alan Turing was prosecuted for the crime of being gay in Britain. I don't think a work this light-hearted and mainstream would tackle that so casually?

All that said, I do slip up in the time consciousness sometimes. "John" was totally Stephen Hawking to me. Right first name and everything.


I liked this story too. No Powell & Donovan, for one! While the story leaves it open-ended, I can't possibly believe he's not a robot, since Calvin seems to believe it. I do love the idea of creating another robot for him to engage in public fisticuffs with; that's clever.

2) The whole concept of growing a human shell using human ova and so on is just... creepy. Eek!

I didn't think it was creepy so much as science-fiction-y. It's not so far different from either Terminator or what people are working on today. Asimov handwaves the difficulty in getting the human eyes to connect to a positronic brain, and all that, but I can go with that for the sake of suspension of disbelief. I think. At least, if I can overlook the persuasive arguments of Lanning et al. about the availability of positronic brains on eBay, then I can get past the part about the eyes.


This week's outside tidbit: The local paper had a story on updating Asimov's Three Laws. (If the link goes stale, I saved away the PDF and can upload it.) I wasn't too impressed by their changes, since no matter how much grief we give him, or how much joy Asimov gets out of exploring edge cases, these "updated" laws seem so vague as to be meaningless. Realistic, maybe, but lame. Not useful as laws.

Edited at 2009-11-24 02:38 am (UTC)
stormfeather
Nov. 24th, 2009 03:33 am (UTC)
2) I don't know why I find it so creepy, I just do. More creepy than the growing-organs-in-vats thing.

I think part of it is the particular blend... the thought of having to pretty much perform surgery just to maintain mechanical parts, or whatever. Ewwwwy.

And yeah, those updated laws are... well, yeah, lame.
dscotton
Nov. 24th, 2009 07:28 am (UTC)
I think this was my favorite story in the book. I don't remember as many glaring anachronisms, and meanwhile the nature of the political campaign is as relevant today as it was when he wrote it. I did wonder for a while whether he was really going to turn out to be a robot, or whether he was just letting his opponent drag out the issue to make the opponent look like that much more of an idiot when the accusation was revealed to be wrong, and at the same time bring the issue of robot rights into the public debate.
(Deleted comment)
(Deleted comment)