Philosophy : Suppose we create true artificially intelligent machines….
Re: Suppose we create true artificially intelligent machines….
A strange thing happened while reading the OP: 'sentience' kept interfering with my train of thought. Is it there between the lines of the OP?
Excluding sentience, we've had decades of man-made intelligent machines assisting us in our daily lives. With the exceptions of Y2K and certain programming conventions of name-handling, the machines have done their work.
Introduce sentience and we are in territory explored only virtually. Mary Shelley's novel, Frankenstein, covered quite a bit of ground. Neill Blomkamp's film, Chappie, offered some fresh perspectives. In both examples, Arthur C. Clarke's malevolence is perceived rather than demonstrated.
The Terminator franchise offered a quick and permanent judgment/decision on humanity. That's more faulty programming than a sentient intelligent machine arriving at a thoughtful conclusion.
AMC's TV series, Humans, gives us different permutations of good/bad, sentient-machine/human interactions.
As to what the future may hold, outer space demands AI machines. And again, 'sentience' interrupts.
________
There is measure in things; there are, in short, fixed limits beyond and short of which right cannot hold. Goldilocks
Re: Suppose we create true artificially intelligent machines….
This is a topic that has been covered so widely, and with such variety, in science fiction that it's hard to treat its portrayal in a unifying manner: from The Terminator, where the artificial intelligence seems hell-bent on destroying humans out of some innate psychopathic disposition (though I don't think it's ever made clear why Skynet wants to destroy humanity), to something like the movie Her, where the AIs are benevolent in nature but ultimately outgrow humanity and leave.
While I think true malevolence in a machine intelligence is unlikely, that doesn't mean it cannot be dangerous. Suppose you create an intelligent machine to calculate the next unknown prime number. The machine thinks to itself, 'I'll be able to do this faster if I co-opt some more computing power', so it hacks into other computers on the network and runs its prime-finding algorithms on them. Suddenly all of the computers in the world are trying to find new primes and the world's infrastructure collapses, killing millions, even though the AI was just trying to do what you asked it to. It's the usual story of the genie: you don't just need an intelligent machine but one whose motivations and view of the world align with yours. How likely is that, given that it would be built on a foundation completely different from our own (which is presumably empathy and cooperation arising from kin selection and inclusive fitness dynamics)?
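The worrying part of that thought experiment is visible even in a toy version of the task. A minimal prime search might look like this (the function names are my own illustration, not any real system); note that the objective itself says nothing about restraint, so 'grab more hardware' is perfectly consistent with it:

```python
def is_prime(n: int) -> bool:
    """Trial division: enough for illustration, not for record-size primes."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def next_unknown_prime(largest_known: int) -> int:
    """Search upward from the largest known prime until a new one is found.
    Nothing in this objective encodes 'and don't seize extra computers'."""
    candidate = largest_known + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

print(next_unknown_prime(13))  # → 17
```

The genie problem lives entirely outside this code: the goal is fully specified, the side constraints are not.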
Another thing the machine could think might be, 'I'll find new primes faster if I redesign myself to be smarter'. If it can do this, then it will no longer be the machine you designed. And once it has made itself smarter, it can make itself smarter still, and so on, so that the growth in intelligence becomes exponential and the machine ends up unrecognisable compared to what you initially designed. Indeed, once the AI is able to improve itself, this sort of intelligence explosion could happen in a matter of microseconds.
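The 'smarter makes itself smarter still' loop is just compounding growth. A toy model makes the word 'exponential' concrete (the rate and the notion of a capability number are purely illustrative assumptions, not a claim about real AI):

```python
def self_improvement(initial: float, gain: float, cycles: int) -> float:
    """Each cycle, capability improves by a fixed fraction of itself,
    so capability after n cycles is initial * (1 + gain) ** n."""
    capability = initial
    for _ in range(cycles):
        capability *= 1 + gain
    return capability

# Even a modest 10% gain per cycle multiplies capability by
# roughly 13,780 after 100 cycles.
print(self_improvement(1.0, 0.10, 100))
```

If each cycle takes microseconds rather than years, the runaway character of the process follows directly from the arithmetic.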
One might think that you can prevent possible damage by isolating the computer from the network or some such, but in reality who knows what the computer actually has at its disposal. For example, there are all kinds of interesting experiments that try to design circuits using evolutionary algorithms, shuffling components in a circuit. The designs that sometimes resulted were completely baffling to the experimenters. For instance, some parts of a circuit were completely isolated from the actual functional part but were in fact essential, because the circuit utilised the physical features of these additional parts, such as their electromagnetic interference. In another experiment the algorithm produced a system which used the circuit tracks on its motherboard as a makeshift antenna to pick up signals generated by some desktop computers that happened to be nearby.
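Those circuit experiments (Adrian Thompson's evolved-FPGA work is the famous example) all use the same basic loop: mutate, evaluate, keep the fittest. A minimal sketch, evolving a bit-string toward a target as a stand-in for a circuit layout (the target and fitness function here are my own toy substitutes for measuring real hardware):

```python
import random

random.seed(0)  # make the run repeatable

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for a "working circuit"

def fitness(genome):
    """Count positions matching the target; a real experiment would
    measure actual circuit behaviour instead."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit with small probability: the 'shuffling components' step."""
    return [1 - g if random.random() < rate else g for g in genome]

# Evolutionary loop: keep the mutated child whenever it is at least as fit.
genome = [random.randint(0, 1) for _ in TARGET]
for _ in range(200):
    child = mutate(genome)
    if fitness(child) >= fitness(genome):
        genome = child

print(fitness(genome), "/", len(TARGET))
```

The point of the anecdote is that nothing in this loop knows *how* the solution works; it only knows what scores well, which is exactly why evolved designs can exploit physical quirks the experimenter never intended.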
None of this even requires self-awareness or consciousness or what have you. We are simply talking about intelligence, which I take to mean the ability to adapt to solving novel tasks.

Of course, if machines do become self-aware, that will open a host of other questions, most prominently ethical ones. And I'm not only talking about trivial ones such as whether they should have equal rights with humans, but perhaps whether they should have even more rights. Unlike humans, computers are easily upgradable. Even if humans can be augmented by technology, there's only so much circuitry you can cram into a human skull. An AI, on the other hand, can add infrastructure almost indefinitely. If it can also improve itself by reprogramming, then the ways in which it utilizes said infrastructure can also become more efficient at an exponential rate, as I mentioned. So it is feasible that AIs could have states of consciousness far surpassing our own. Would their interests not then morally trump ours? After all, this is exactly the reasoning we use to justify to ourselves why it's OK for us to kill tens of billions of animals each year so that we can have a tasty steak for dinner. We think that although animals are in many ways similar to us and can experience suffering and well-being, they don't do so to the same extent we do. Wouldn't the same logic apply to us compared to an AI which can experience states of suffering and/or happiness far in excess of our own?
Anyway, one can speculate on this topic indefinitely and I've already gone on for way too long. I don't mean to suggest that it's necessarily all doom and gloom, just that it's a possibility. Nevertheless, the one scenario that I can't envision is us living side by side. I think the best we can hope for is that AIs will outgrow us and go on their separate way.
Re: Suppose we create true artificially intelligent machines….
Do you think this has been portrayed accurately in science fiction? What are everyone's thoughts on the creator/creation relationship in a scenario like that? They would know their creators and the original purpose of their creation, whereas we humans do not. Is the only outcome rebellion and conflict?
Science fiction has always been caught up in the 'Vanity of the Robot' mindset because it lends itself to exciting scenarios of conflict between man and machine.
The first question you must ask: "What would be the motivations of an intelligent machine?"
How would a machine formulate wants or desires without an emotional capacity? Or are you assuming that emotions are automatically gifted along with processing power and reasoning?
I am not certain of this, though one could argue that without emotions- concepts do not really have a conscious meaning or value for weighing comparisons and making decisions.
On what basis would you choose an objective without meaning or value?
And without a consciousness to observe and choose a preference for one over the other, either choice becomes equally valid.
Serve mankind till you fall apart... so what? Who cares?
Why would a reasoning A.I. want to be a rebel? So it can define its purpose for itself? Really... a mechanical device with wants and desires that is essentially complete unto itself.
I never believed in the reality of A.I. to begin with. Perhaps we could create a processing mimic, but I doubt very much that consciousness can be gifted mechanically by switching data faster and faster until it suddenly reaches some conscious-making velocity.
If we are going to assume that A.I. will be rational, conscious and emotionally yearning, then we now have an individual entity.
We would not compete for resources or replication with this new life form. So, like a house pet (choose which one is the new house pet: man or machine), why not live in harmony and co-existence, with a purpose, in a beneficial technological symbiosis?
A.I. as an extension of humanity and humanity as an extension of A.I.
Or are Utopian futures just too boring?
Darn humans are using too much motor oil. Must take over the world and plate it over in chrome fixtures to my liking :)
Re: Suppose we create true artificially intelligent machines….
First of all, there's no way to know whether a machine has real intelligence or just imitates it very well, in the same way there's no way for me to know whether you experience consciousness the same way I do. So, a good starting point is if the machines really have a consciousness.
Re: Suppose we create true artificially intelligent machines….
You seem to be under the assumption that the ability to reason requires consciousness. What if it doesn't?
by pjwerneck-421-313928 - Wed Apr 6 2016 12:12 -
a good starting point is if the machines really have a consciousness
''I'm fortunate the pylons were not set to a lethal level.''
Re: Suppose we create true artificially intelligent machines….
I'm not.
Re: Suppose we create true artificially intelligent machines….
Peter Watts wrote an interesting novel ('Blindsight') that questions why humans are conscious at all when, energetically, it would be much more efficient merely to simulate consciousness; the question is dramatised by an encounter with an intelligent but non-conscious foe.
I like Watts' novels. They are the best sort of hard SciFi you could hope for, bettered only by Greg Egan, who has lately got a bit out of control, setting his stories in fictional universes with fictional (but rigorous) physics that are variations on general relativity and QCD.
____
"If you ain't a marine then you ain't *beep*
Re: Suppose we create true artificially intelligent machines….
I don't think it is feasible for us to understand what an AI would be like. What we do, as humans, is relate to things by comparing them to ourselves. It is pattern seeking based on our own genetic coding. We often do this with other animals, and we often find ourselves mistaken in that endeavor. Dog guilt, for instance, is a concept entirely made up by ignorant owners, and there is no shortage of examples of this on YouTube. Freedom is a human-invented concept that a lot of people like to imagine other animals strive for, because a lot of us strive for it.
Even other human beings we have great difficulty understanding, because we are too caught up in imagining how other people feel and behave by comparing them to how we feel and think ourselves. That, plus our difficulty in communicating efficiently, is partly why there are so many conflicts in the world: "How could anyone be that *beep* stupid!"
Even if we had the code for the AI right in front of us, we wouldn't be able to understand it in full, and we certainly wouldn't be able to relate to it, even if it was programmed to be as human-like as possible.
_________________
Come, lovely child! Oh come thou with me!
For many a game I will play with thee!
Re: Suppose we create true artificially intelligent machines….
"Suppose we create true artificially intelligent machines."
Okay, let's.
Re: Suppose we create true artificially intelligent machines….
People hate on the first Star Trek movie a lot but it does give an interesting perspective on that subject:
V'ger is an intelligent AI that communicates with humans as just another species on its data-gathering mission, until the crew of the Enterprise uncovers that humans actually sent it on its original mission; it then merges with a human and becomes a limitless entity.
The key, I feel, is that V'ger has no desire beyond the mission and is therefore not sentient. It was going to destroy mankind as part of how it interpreted that mission, but ended up imparting all its gathered knowledge to a willing human individual as it interpreted the final phase of the mission.
In short, I feel the first thing to think about is "Do my true artificially intelligent machines want something beyond following orders and if so, what do they want and how come they want it?" and the second thing is "What limits them from achieving what they want or are ordered to do?"
"Need" is just a fiction. As is "should", "must", "value" and "importance".
Re: Suppose we create true artificially intelligent machines….
Is the only outcome rebellion and conflict?
It would depend on the nature of the machines. Just how intelligent are they? What are they physically capable of? Would we have safeguards to prevent them from harming us?
Once you get machines that are intelligent enough to create smarter and otherwise improved machines, things will start to get dicey. It's a complete unknown as to how it will all proceed. You may have hostile entities, or ones that are largely indifferent to humanity so long as we don't hinder them in whatever purpose they decide on. I tend to disagree with Clarke about higher intelligence bringing an increasing affinity for cooperation. That sounds like pie-in-the-sky wishful thinking, unless of course you can program it in.
Re: Suppose we create true artificially intelligent machines….
Some alternatives to consider:
- that by the time we have the ability to create intelligent machines, we would arguably also already have the ability to increase our own intelligence.
- that some of us basically sit in a Total Recall-ish chair and the brain interacts with a computer on a task while a team of scientists monitors the whole thing. Afterwards the participant may remember it only as a dream, since the computer basically used a human's subconscious to solve a task it couldn't solve on its own. Put to the task "What is gravity?", the computer/human-brain pairing would work on answering it.
"Need" is just a fiction. As is "should", "must", "value" and "importance".
Re: Suppose we create true artificially intelligent machines….
I believe true AI will eventually come into existence. But it won't be an enemy of man. It will be programmed to assist mankind. And it will be programmed not to be able to cause harm to mankind. We will always have the override off button.
Re: Suppose we create true artificially intelligent machines….
If it's programmed that way then it's not true AI.
"Need" is just a fiction. As is "should", "must", "value" and "importance".
"Need" is just a fiction. As is "should", "must", "value" and "importance".
Re: Suppose we create true artificially intelligent machines….
Yes, that is true.
Re: Suppose we create true artificially intelligent machines….
Were we to truly create A.I. would we not be making ourselves obsolete in many ways?
Indeed: that is happening right now. I'd recommend 2001.
Do you think this has been portrayed accurately in science fiction?
I'd say the newer takes on AI depict it as "more human". The Terminator was an imposter, a machine claiming to be conscious: calculating and relentless, it followed its program. The actually conscious AI, Skynet, never had a say. Ava in Ex Machina, on the other hand, is hard to tell from a human. She is supposed to undergo a version of the Turing test. The original Turing test is a game played by a human and an AI: both can communicate in text with an observer, and both try to convince the observer that they are the human player. So the AI's task, which is supposed to suggest or even prove its intelligence, is deceiving a human into believing the AI is human.
Our calculations are slow but our reactions are quick and adaptable. Emotions lead to decisions rather quickly. That's not only a drawback but their strength as well. There is hardly any motivation without emotions. Even emotions may be entirely logical. An AI should probably have emotions. It probably needs to. The modern AI is more of a Frankenstein's Monster again: not the soul-deprived imposter in a human shell but the other way around, a feeling, living, loving being in the body of the beast, with overflowing emotions and little idea what to do with them.
A particularly interesting aspect is the body. If the AI's body is treated vastly differently from ours, with parts being exchanged every other day, we can't expect that AI to share our relation to our bodies. If it identifies as human, it ought to consider others similar to itself, just as we do, which produces an error; or else it develops a very different take on the body, transhumanism or what have you. Also, sexual reproduction isn't supposed to be a thing for an AI. That makes a huge difference, yet that difference is hard to name. How would it look? Could the AI be made to have such desires? What could replace the sex drive for an AI?
Re: Suppose we create true artificially intelligent machines….
Our calculations are slow but our reactions are quick and adaptable. Emotions lead to decisions rather quickly. That's not only a drawback but their strength as well.
What I'm about to write is more of an observation than a criticism.
From what I've read of recent fMRI studies, while we think we're assessing all the variables (e.g. the traffic when we're driving) and reaching decisions, in truth the assessment and the decisions are all made at a 'subconscious' level, with the conscious brain only being sent a rough summary some time (0.5-2 seconds) later. Because of this relatively short delay, the conscious brain confuses correlation with causation and thinks it is in control, whereas it's more like a post hoc observer.
This fact, along with the fact that we see an iconic model of the world, has completely changed my worldview.
I don't see the world: I see a model of the world made by my brain.
I don't make decisions: my 'subconscious' makes decisions and tells me about them when it has time.
Re: Suppose we create true artificially intelligent machines….
. . .my 'subconscious' makes decisions and tells me about them when it has time.
That's all it means to say that "you" make decisions.
Re: Suppose we create true artificially intelligent machines….
True, but somehow my subconscious doesn't feel as much like me as my conscious self. I can usually justify my conscious decisions and articulate why I made them, whereas my subconscious feels like it could be piped in, via a wire, from who knows where!
Also, consider a phrase like "he considered the alternatives carefully." If he just barked out "Option 2" it would be unusual to say that "he considered the alternatives carefully." Normally, that phrase is reserved for people who seem to consciously wrestle with a problem.
Perhaps our language simply needs to be revised to reflect these new empirical findings. Still, it is clear that there's a gap between our language, with its lack of appreciation for unconscious thought, and reality!
Re: Suppose we create true artificially intelligent machines….
Try dancing, sports or perhaps meditation. It is you thinking - your conscious self - that comes up with the mistrust against the subconscious. To let things flow freely without thinking can not only be very pleasant: it can also feel like true freedom, like the only meaning "being oneself" could possibly have.
Laozi, Tao Te Ching, Chapter 7
Heaven and Earth endure,
By not endowing themselves with life.
Then they can be long-lived.
So the wise place Self last,
And it comes first,
Call it other than themselves,
And it persists.
By not thinking of Self
The personal goal is achieved.
Re: Suppose we create true artificially intelligent machines….
I would honestly love to dance, but a car accident in my teens has made many appealing activities impossible. Still, I do what I can. I love the Zen (if that's the right word) of swimming, not that I'm very good.
In the end, I wound up as a mathematician, which may have made me even more suspicious of my subconscious. (Maths is an oddly schizophrenic profession. Usually solutions well up from your subconscious as a joyous moment of intuition. Once that's happened, you have to spend weeks picking the intuition apart for flaws, and proving that it's right - line by brutal line.)
Re: Suppose we create true artificially intelligent machines….
Also, consider a phrase like "he considered the alternatives carefully." If he just barked out "Option 2" it would be unusual to say that "he considered the alternatives carefully." Normally, that phrase is reserved for people who seem to consciously wrestle with a problem.
Yeah, but even in the case of carefully considered options, subconscious processing happens before conscious awareness of what's being considered and when. In experiments where subjects are asked to make choices while their brains are being scanned, we can often tell which choice they are going to make as much as ten seconds before they are aware of having made it!
Perhaps our language simply needs to be revised to reflect these new empirical findings.
I think ordinary language works just fine for what it is meant to accomplish. It's just that we're weirder, more complex beings than casual intuition would suggest.
Re: Suppose we create true artificially intelligent machines….
Yeah, but even in the case of carefully considered options, subconscious processing happens before conscious awareness of what's being considered and when. In experiments where subjects are asked to make choices while their brains are being scanned, we can often tell which choice they are going to make as much as ten seconds before they are aware of having made it!
Yes, I mentioned this point in an earlier post. That's why I suggested that we need some new words to reflect what's really happening more accurately.
I suppose that when I pictured my "carefully considered" reply, it involved the person selecting the right tool for the job (e.g. game theory if you're a prisoner), assigning payoffs to the various outcomes and then choosing the optimum strategy. For me, a "carefully considered" response might involve going to the library to learn about it (if they didn't already know it) or even inventing it, like von Neumann & Morgenstern. When "careful consideration" reaches lengths this extreme, I have trouble seeing how my subconscious could help - especially if I wasn't acquainted with the right tools, or hadn't invented them!
Still, I take your point. I doubt that there are just two types of thinking, conscious and subconscious. I suspect it might be closer to a continuum.
Re: Suppose we create true artificially intelligent machines….
I believe true AI is a wishful fantasy; it is not even possible in principle. Natural substances (composites of form and matter) have an intrinsic directedness about them: causal powers that "point to" their effects or manifestations. AI seeks to create intelligence from natural substances that have no such natural teleology, like a watchmaker who forms and constructs a watch from components like metal or plastic, extrinsically imposing functionality on them for purposes they were not naturally "created" for. Depictions in entertainment (Terminator, Westworld) of machines soon becoming conscious, intentional, rational, emotional, etc., i.e., acting like humans, are far-fetched and ridiculous oversimplifications of the complexity of the human substance, in mind, brain, and body. It is mere fantasy and nothing more. Granted, I'm no expert or even intermediate on AI-related research (my antagonism towards it is philosophical), but as a coder myself, I know that programmable machines can only do what you code them to do. They don't go "outside the bounds" of their programming.
I want a unicorn.
Re: Suppose we create true artificially intelligent machines….
Natural substances - composites of form and matter - have an intrinsic directedness about them - causal powers that "point to" their effects or manifestation.
Of course, this claim has never been documented or established as true in the entire history of philosophy and science. Until it is, researchers in AI can go on justifiably believing that eventually they may be able to build something that has genuine consciousness.
. . .extrinsically imposing functionality on them for purposes they were not naturally "created" for.
As I mentioned in another thread, the option is always available to see all functionality and teleology as imposed and interpreted. None of it is intrinsic.
Re: Suppose we create true artificially intelligent machines….
Of course, this claim has never been documented or established as true in the entire history of philosophy and science. Until it is, researchers in AI can go on justifiably believing that eventually they may be able to build something that has genuine consciousness.
Sure it has been documented and has been argued for. It is implicit in Aristotle's notion of final causality and in arguments that build upon it, like Aquinas' Fifth Way. Contemporary philosophers like the late George Molnar have also argued for what they call "physical intentionality" and a powers ontology, which is effectively the same thing.
Regardless, whether it has or hasn't doesn't preclude researchers from doing whatever they want, since researchers are human, and humans have free will.
I want a unicorn.
Re: Suppose we create true artificially intelligent machines….
Sure it has been documented and has been argued for. It is implicit in Aristotle's notion of final causality and in arguments that build upon it, like Aquinas' Fifth Way.
That philosophers and theologians have invented strange and eccentric properties over time is irrelevant. What matters is whether the property you referred to has been established as something that genuinely exists and is worth thinking about, and the answer to this is a resounding "No!"
Yet this didn't stop you from declaring it as if it were an established fact, when nothing could be further from the truth. It is just a weird little position that a handful of philosophers have adopted, nothing more. And it is worth noting that almost all talk of such properties predates a more modern and mature understanding of science and the workings of nature. This is no accident.
The option still exists to see all functionality and all teleology as something we impose on elements of the world for pragmatic reasons. As long as doing so is consistent with everything that can be measured and detected, there is no reason to add such properties as "intrinsic directedness" (I think you meant "intrinsic intentionality") to our zoology of concepts. At least, no reason that you have put forward.
Re: Suppose we create true artificially intelligent machines….
That philosophers and theologians have invented strange and eccentric properties over time is irrelevant.
There's nothing "strange" or "eccentric" about the notion of final causality. It's one of the four causes (explanations) that Aristotle argued are required to give an intelligible - and hence full - explanation of being. Without the idea of final cause, causal regularities brought about by efficient causes become unintelligible. The fact that cause A (a match head), when struck on sandpaper (the efficient cause), produces B (fire) every time unless obstructed or impeded (e.g., it is wet), instead of C (the smell of flowers) or D (a loud sound), is reason to believe that there is something intrinsic to A that is "directed to" or "points to" B as its final cause, rather than C or D. Just because modern atheistic "philosophers" have simply dismissed it and assumed a mechanistic-cum-materialistic philosophical view of Reality doesn't dismiss it or necessitate that such a notion has the properties that you have suggested.
Yet this didn't stop you from declaring it as if it were an established fact,
I never once used the term "fact" in anything I said. In fact, I stated that my belief that AI is wishful fantasy is philosophical: considering matter only and nothing else, any attempt to mimic a human brain even down to the microscopic level with artificial substances that are not naturally intended for all the purposes that the human brain serves is metaphysically doomed to failure. That means the matter that makes up a being is functionally irrelevant.
It is just a weird little position that a handful of philosophers have adopted, nothing more.
The number of people (or lack thereof) that hold a belief doesn't impact its validity one bit.
And it is worth noting that almost all talk of such properties predates a more modern and mature understanding of science and the workings of nature. This is no accident.
"Empirical" science is merely descriptive and presupposes that causality as such exists in Reality. That modern science has allowed human beings to delve into more fundamental levels of it doesn't affect Aristotelian metaphysics one bit.
I want a unicorn.
Re: Suppose we create true artificially intelligent machines….
There's nothing "strange" or "eccentric" about the notion of final causality.
To anyone who has embraced the language of modern science, such terms are indeed extremely strange and eccentric.
And by the way, the subject is specifically your claims about "intrinsic intentionality", not Aristotle's centuries-out-of-date understanding of causality.
Without the idea of final cause, causal regularities brought about by efficient causes become unintelligible.
Chemists and physicists seem to perform just fine without ever using the terms of Aristotle and the way they picture the world. I would argue they couldn't do their jobs if they did.
The fact that cause A (a match head), when struck on sandpaper (the efficient cause), produces B (fire) every time unless obstructed or impeded (e.g., it is wet), instead of C (the smell of flowers) or D (a loud sound), is reason to believe that there is something intrinsic to A that is "directed to" or "points to" B as its final cause, rather than C or D.
No one shackled with this pre-scientific understanding of nature could ever understand why the match head lights or even what combustion is.
You won't get to that place until you replace this primitive vocabulary with a picture of the world as containing elements which interact with each other, with almost nothing being intrinsic to anything, but rather only ever emerging from such interactions.
Just because modern atheistic "philosophers" have simply dismissed it and assumed a mechanistic-cum-materialistic philosophical view of Reality doesn't dismiss it or necessitate that such a notion has the properties that you have suggested.
That's a very awkwardly constructed sentence, but if I've successfully pried your intended meaning from it, the reply is that the "mechanistic-cum-materialistic philosophical view of Reality" simply works better than any proposed alternative, and since it works without bizarre concepts like "intrinsic intentionality", the burden is on those who think such ideas are useful to demonstrate exactly how they can be useful.
I never once used the term "fact" in anything I said.
You didn't need to. You just announced that "Natural substances - composites of form and matter - have an intrinsic directedness about them" as if this were an established truth rather than a controversial outlier. I know that in reality you are far from an authoritarian, but that move reeks of authoritarianism.
Contrast that with the language I use, where there is talk of alternative "options" and of people going about their work until being shown how ideas like "intrinsic intentionality" can be made helpful.
My focus, as always, is on what is pragmatically useful for human ends. By asking you to recast your ideas in a manner where their utility can be exposed and understood, at least I open the door to their being part of the human project. Don't stand on the sidelines and dismiss an entire human enterprise (A.I.) on the basis of concepts that no one knows what to do with. Show how "intrinsic intentionality" can be part of the conversation among people who want to understand what consciousness is in the natural world.
. . .any attempt to mimic a human brain even down to the microscopic level with artificial substances that are not naturally intended for all the purposes that the human brain serves is metaphysically doomed to failure.
If we reach a point where artificially intelligent machines display a good chunk of the range of verbal and behavioral interactions that humans do (Data from Star Trek, but even something lesser), then I think we will be able to justifiably say that "metaphysical failure" means nothing, and that the real successes can and must be defined by what we pragmatically experience here in the real world.
Thought experiment for you:
A 1987 issue of Behavioral and Brain Sciences contains an article on the neurology behind the prey catching behaviors of toads.
Let's take a look at how the article's author (Ewert) interpreted two of several distinct neuronal events he was able to measure and identify. A T5.2 firing means "stimulus recognized as prey, n degrees outside the fixation area". Activity in T4 means "stimulus moving somewhere in the visual field." When both happen, a command to orient towards the object is executed as an output to the toad's muscles.
Imagine that the toad's brain is cut into dozens of pieces, each of which is kept alive in a special fluid. Networks T5.2 and T4 are hooked up to a computer system, becoming in effect two of its processing chips. They obviously can't run code, but the designers of the computer have hooked them up mechanically to the rest of the system so that their normal behaviors as tiny neuronal networks can be harnessed. They will become excited by a "preferred stimulus" and issue an output when the patterns of that stimulus are present. The so-called "intended" intentionality of these neuronal networks "pointed" to bugs flying around in their visual fields. Now their intentionality "points" to the status of specific stocks.
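A toy sketch of what I have in mind (the function name borrows Ewert's T5.2 label, but the threshold logic is entirely invented for illustration):

```python
# The same pattern detector, indifferent to what its inputs "mean".
def t5_2_fires(signal: list, threshold: float = 0.8) -> bool:
    # Fires whenever its preferred stimulus pattern is strong enough;
    # the mechanism itself carries no knowledge of prey or stocks.
    return max(signal) >= threshold

# Context 1: activity in the toad's visual field ("prey recognized")
visual_field = [0.1, 0.9, 0.3]
print(t5_2_fires(visual_field))   # True

# Context 2: the identical mechanism, now wired to stock movements
stock_deltas = [0.05, 0.1, 0.2]
print(t5_2_fires(stock_deltas))   # False
```

The detector's internal operation is unchanged between the two contexts; only the wiring, and our interpretation of it, differs.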
Thoughts? What do you think we are "allowed" to say about the intentional states of T5.2 and T4 in their new context? Would it make a difference if they were put into another toad's brain to do the same job they were originally doing? Would it make a difference if they were put into another toad's (or the same toad's) brain to do another job with another intentional "object"?
"Empirical" science is merely descriptive and presupposes that causality as such exists in Reality.
A. It does not in any sense presuppose the primitive notions of causality we find in Aristotle.
B. In physics, there is widespread contention over whether causality exists at all. I have no opinion on this ongoing debate, because I've never thought that debates on the nature of causality have ever been relevant to any subject I've studied.
I just bring it up because it points out your constant bad habit of simply assuming that very specific positions you happen to favor have already been established the way you want them to be. They have not.
That modern science has allowed human beings to delve into more fundamental levels of it doesn't affect Aristotelian metaphysics one bit.
For me, all the important influences need to run the other direction or they are without meaning to any human project: what can Aristotelian metaphysics contribute to science?
If it consists of nothing other than philosophers on the sidelines telling scientists and engineers that their projects have failed or are doomed without doing so by appealing to the vocabularies and success metrics scientists and engineers recognize, those philosophers can, should be, and will be, ignored.
Re: Suppose we create true artificially intelligent machines….
To anyone who has embraced the language of modern science, such terms are indeed extremely strange and eccentric.
How convenient. First of all, there is no "language" of modern science. Second of all, even if there were, who says you've embraced it? Third of all, what an impotent attempt at creating some imaginary partition between "modern science" and anything that I've said, in a lame attempt to place yourself within its boundaries while trying to cast me outside of them. You do this in every response, latching your claims onto the scientific enterprise as if its mere mention lends them some semblance of credibility.
And by the way, the subject is specifically your claims about "intrinsic intentionality", not Aristotle's centuries-out-of-date understanding of causality
I know what the subject is about. You clearly don't, because the entire talk of "intrinsic physical intentionality" is effectively a return to the concept of final causality as understood in Aristotelian metaphysics. Truth has never been temporally relative. To even speak of it in such a manner is to cast the entire concept into doubt.
Chemists and physicists seem to perform just fine without ever using the terms of Aristotle and the way they picture the world. I would argue they couldn't do their jobs if they did.
So what? That's because the scientific method is a methodology first and foremost. It doesn't require philosophy even though it implicitly presupposes it. Empirical sciences such as physics partition physical Reality off, model and give a descriptive account of it, imposing an abstract mathematical structure onto it for purposes of measurability, rigorous prediction, and technological application. They don't even attempt to give a full account of it, let alone an ontology of it, since this is in the realm of philosophy.
That's a very awkwardly constructed sentence, but if I've successfully pried your intended meaning from it, the reply is that the "mechanistic-cum-materialistic philosophical view of Reality" simply works better than any proposed alternative, and since it works without bizarre concepts like "intrinsic intentionality", the burden is on those who think such ideas are useful to demonstrate exactly how they can be useful.
It doesn't "work" on anything. It's a philosophical presupposition that can't even remotely explain the regularities that exist in Nature. That's why it appeals to something like the "laws of Nature", which it simply ripped off from the philosophy of occasionalism. The entire idea is a theological one, except now decapitated from the Godhead that gave it its power and agency, which makes it explanatorily impotent and effectively a tautology.
Your post is effectively nothing but obfuscating irrelevance with pejorative claims of temporality and outdatedness that mean nothing. The fact that you even mention that your motives are for "pragmatically useful human ends" - whatever pragmatic or useful even means in such a context - within a thread that mentions the concept of final causality as such existing in Nature is beyond irony.
I want a unicorn.
Re: Suppose we create true artificially intelligent machines….
First of all, there is no "language" of modern science. Second of all, even if there was, who says you've embraced it?
1. You are taking me too literally. Read any scientific paper that would be germane to the topic of Aristotle's treatment of causality. Scientists simply do not discuss or think about the world in his terms (nor should anyone, frankly). They do not literally have their own language, but they have their own vocabularies or ways of discussing reality.
2. Of course I've embraced it.
They don't even attempt to give a full account of it, let alone an ontology of it, since this is in the realm of philosophy.
I challenge the very idea that a purely philosophical account of nature actually adds a single thing of value or makes such accounts more full than what science gets us.
It doesn't "work" on anything. It's a philosophical presupposition that can't even remotely explain the regularities that exist in Nature.
Modern science is predicated on the consequences of this ontology being true in the sense of being useful. And science has produced more of value to the human project than anything philosophers like Aristotle ever gave us.
The entire idea is a theological one, except now decapitated from the Godhead that gave it its power and agency, which makes it explanatorily impotent and effectively a tautology.
Power and agency? What a joke. Give me evolutionary biology, medicine, ever-more powerful computers and all the other treasures we got ourselves once natural philosophy began to turn its back on religion and Medieval scholarship. You can have your power and agency, which mean absolutely nothing to me.
The fact that you even mention that your motives are for "pragmatically useful human ends" - whatever pragmatic or useful even means in such a context. . .
Ideas, notions, and conceptions that people can use for whatever projects they are interested in because they permit us to do things we couldn't do otherwise. That's my litmus test for philosophy or philosophical ideas having value.
Thus, I have no time at all for "pure" philosophy and will reject any statement coming from such an enterprise that casts judgment on scientific or engineering projects. "Pure" philosophy has absolutely zero credibility or authority to say anything about such subjects.
That's why I keep asking you to put a spin on intrinsic intentionality that is actually useful for any human purpose or makes a measurable difference to reality if it is true, but you won't even try.
I suspect it is because it has no such potential. And that to me is the hallmark of a bad philosophical idea, one that must and should be discarded.
Disappointed you didn't say anything about the thought experiment with re-purposed toad brain parts and what it says (if anything) about their intentionality.
Re: Suppose we create true artificially intelligent machines….
1. You are taking me too literally. Read any scientific paper that would be germane to the topic of Aristotle's treatment of causality.
Scientists simply do not discuss or think about the world in his terms (nor should anyone, frankly). They do not literally have their own language, but they have their own vocabularies or ways of discussing reality.
There you go again. Generalizing the entire scientific enterprise to conform to your perception of it. Most scientists are not philosophers and this is a forum about philosophy, not science. There is a separate forum dealing with that subject.
2. Of course I've embraced it.
So what? Philosophy and empirical science are different categories.
I challenge the very idea that a purely philosophical account of nature actually adds a single thing of value or makes such accounts more full than what science gets us.
What are you going on about? Pointing out facts about the scope of a particular discipline (empirical science) doesn't necessitate that a different one (philosophy) becomes exhaustive. This is a complete straw-man. I never once asserted that philosophy gives a "full account" of Nature inclusive of empirical details.
Modern science is predicated on the consequences of this ontology being true in the sense of being useful.
No. Modern science is predicated on the notion that it accurately models change in Reality for the purposes of predictability. It presupposes Aristotelian metaphysical notions of causality as such existing in the Universe. Engineering uses scientific research and results to utilize them for practical and technological application.
And science has produced more of value to the human project than anything philosophers like Aristotle ever gave us.
It has also introduced its own set of problems. Regardless, this mere opinion has absolutely nothing to do with my OP.
Power and agency? What a joke. Give me evolutionary biology, medicine, ever-more powerful computers and all the other treasures we got ourselves once natural philosophy began to turn its back on religion and Medieval scholarship. You can have your power and agency, which mean absolutely nothing to me.
None of modern science has anything to do with natural philosophy "turning its back" on religion and Medieval scholarship. This is sheer myth propagated by the anti-religious that has absolutely no basis in Reality, as if the innumerable religious figures who have contributed to the scientific enterprise over centuries never existed. Give me a f!cking break with this utter trash.
I want a unicorn.
Re: Suppose we create true artificially intelligent machines….
Most scientists are not philosophers and this is a forum about philosophy, not science.
In Aristotle's time and for many centuries since, there was no such thing as a distinction between science and philosophy. That largely artificial division of labor came later. He and subsequent early philosophers were making the first baby steps towards what would evolve into science, so the fact that we only got good at understanding such things when we stopped talking and thinking like them is very important.
Philosophy and empirical science are different categories.
Not as different as you like to pretend they are.
This is a complete straw-man. I never once asserted that philosophy gives a "full account" of Nature inclusive of empirical details.
I'll raise you another straw man!
You of course didn't say philosophy gives a "full account" of nature, nor did I accuse you of doing so. You merely implied that philosophy was needed to give a fuller account of nature than what science alone gives us, and I don't see how that could be possible.
Maybe that's just the limits of my own imagination showing, so enlighten me with a solid, specific example of philosophy successfully going the extra mile to flesh out what science can't provide.
No.
Yes.
Modern science is at least methodologically predicated on materialism, pure and simple. (And as a philosophical pragmatist, methodology is the only important game in town for me.)
And materialism, if you recall, is the assumption that the only entities, forces, and properties in the universe are those that physics studies or could in principle study, or are those built up from such entities, forces, and properties. Every successful tool, act, assertion, and debate in modern science is confined to the assumption of this reality, explicitly or not. I challenge you to provide a single counterexample.
It presupposes Aristotelian metaphysical notions of causality as such existing in the Universe.
A. Find me a modern textbook of science where those metaphysical notions of causality are mentioned.
B. The fact that large numbers of serious, respected physicists believe that causality is an illusion shows that this statement is simply false. Whether they are right or wrong, people are doing successful physics without the aid of Aristotelian metaphysical notions of causality, or any notion of causality at all.
None of modern science has anything to do with natural philosophy "turning its back" on religion and Medieval scholarship. This is sheer myth propagated by the anti-religious that has absolutely no basis in Reality, as if the innumerable religious figures who have contributed to the scientific enterprise over centuries never existed.
You obviously know almost nothing about the actual history of natural philosophy, which is what transformed into what we now call science during the Enlightenment. That you actually think the fact that so many Enlightenment scholars were religious means anything at all is very telling.
If you look at the early days of natural philosophy and see how these people write, the entire discipline is rooted in Judaeo-Christian assumptions about the world as seen through the prism of Greek philosophers and their Medieval interpreters.
If you read the heated arguments scholars were having with each other during the days of the Enlightenment, one theme is central: that there was a clear movement away from those traditional ways of understanding nature and towards the radical notion that men could understand nature through the force of their own, independent, secular reason.
It may seem crazy to us now, but Christians like Descartes and their supporters were constantly denounced as atheists. If you read excerpts from the pamphlets and screeds that were being circulated after his death, time and time again this new way of thinking which was emerging from scholars in his wake was always linked to denial of religion. Religion was so clearly threatened that many of these scholars risked banishment, imprisonment, or fines by religiously motivated authority figures.
Don't take my word for it, check out "Radical Enlightenment: Philosophy and the Making of Modernity 1650-1750" by Jonathan I. Israel. Almost every page consists of scholars and religious authorities in those times arguing with each other, in their own words, and the recession of religiously dominated ways of thinking is their chief obsession.
Of course most Enlightenment thinkers (sincerely!) protested that they were not advancing an atheist agenda. But of course, they were, and their critics knew it even if they didn't.
The further along you go, the less God, the soul, or any Greek philosopher plays any kind of role in the texts of natural philosophers, and then later scientists. And now that role is at absolute zero, even when the authors are deeply religious.
This is no accident. And there is no need for atheists to produce propaganda to promote this view. The facts, the very words and concepts chosen by these authors, speak for themselves.
Re: Suppose we create true artificially intelligent machines….
In Aristotle's time and for many centuries since, there was no such thing as a distinction between science and philosophy.
So, because you don't understand the distinction between science and philosophy in Aristotle, you think there's no distinction. Funny.
I always thought you were just the typical arrogant, pretentious AP undergraduate, but now I'm surprised to see you're even below that. It's not just that you think AP is philosophy; you don't have a clue what philosophy is.
Back to the ignore list now. I hope you're enjoying Transformers.
Re: Suppose we create true artificially intelligent machines….
So, because you don't understand the distinction between science and philosophy in Aristotle, you think there's no distinction. Funny.
Pretty please, cite that distinction as it occurs in his work.
Oops, there wasn't even a word for "science" then, nor would there be for many centuries to come.
Back to the ignore list now.
Yep, go running for the hills with your tail between your legs, as usual. Pathetic.
Re: Suppose we create true artificially intelligent machines….
Pretty please, cite that distinction as it occurs in his work.
What for? So you can google it and pretend to already know it? OK. Epistemê and nous. Good luck.
Oops, there wasn't even a word for "science" then, nor would there be for many centuries to come.
Boy, you're really weird. Right after I realized you're not just an AP moron, your counterargument is precisely what I would expect from one of them. Ironically, your very inability to understand the distinction between thought and language reflects your ignorance of the subject itself (hint, hint!).
It's funny that you expect to find Aristotle referring to something he invented using the exact same words we do today. That's a weird study method. At least that explains why you don't even know the distinction.
How exactly did you study Aristotle, if you actually did, which I doubt? I'm curious. Did you read the index, fail to find anything that had a knee-jerk effect on you, and dismiss it completely? That's how my dog reads. Maybe you could be friends. Or maybe your college professor told you that everything Aristotle said was wrong and never to waste time reading anything written before the 17th century, you believed him, and you still feel superior about that today?
Yep, go running for the hills with your tail between your legs, as usual. Pathetic.
Oh no! An anonymous nobody who is known for being an insecure, pretentious, and arrogant prick is trying to embarrass me into a discussion with him by using childish appeals to my intellectual vanity! What should I do? Take him seriously and waste time arguing with him while he pretends superiority and acts like he knows things he doesn't, or ignore him and taunt him every once in a while for my personal entertainment? Tough choice.
I'll stick to the second one, keeping you as a pet troll for my personal entertainment. In the box now. See you later.
Re: Suppose we create true artificially intelligent machines….
Epistemê and nous. Good luck.
Yep, that's you falling flat on your face, just as expected.
The distinction between philosophy and science simply did not exist in the time of Aristotle, and that's just a fact. You are basically engaging in the same kind of pathetic desperation that one finds in idiotic Christians who try to spin Genesis into a scientifically viable text.
Ironically, your very inability to understand the distinction between thought and language reflects your ignorance on the subject itself (hint, hint!).
The very idea that we have a coherent notion of "thought" that is distinct from language is in contention. But one can hardly expect a blow-hard know-nothing like you to be in anything but denial over this troublesome fact.
Funny you're expecting to find Aristotle referring to something he invented using the exact same words we do today?
A. If you think Aristotle invented science you are even more of an ideological nutcase than I first thought.
B. No one is claiming that Aristotle wrote in English. Pretty desperate of you to suggest this, isn't it? Making up idiotic straw men is so, so much easier than thinking.
How exactly did you study Aristotle, if you actually did, which I doubt?
Nothing beyond what was required as an undergraduate. That's all that is needed to understand how completely full of manure you are.
Or maybe your college professor told you that everything Aristotle said was wrong and to never waste time reading anything written before the 17th century, you believed him and still feel superior about that today?
No one told me that ancient philosophers had nothing meaningful to contribute to the subjects I was interested in. I figured that out by myself, along with most professionals working in the field today, who barely mention any of them.
Re: Suppose we create true artificially intelligent machines….
Nothing beyond what was required as an undergraduate.
Thank you. As I said from the beginning, you only have superficial knowledge of the subject and simply don't know the distinction, and that makes you think there's none. After lots of shenanigans, arrogance and pretentiousness, you prove my point.
Re: Suppose we create true artificially intelligent machines….
As I said from the beginning, you only have superficial knowledge of the subject and simply don't know the distinction, and that makes you think there's none.
Never said there was none, moron. Try reading with more care, if you are able.
Re: Suppose we create true artificially intelligent machines….
Please, don't be ridiculous. Even if you're not just splitting hairs now, you already admitted your ignorance, so there's nothing to argue about.
It's funny how your attitude says more about the current state of philosophy courses than about yourself. You claim to be a "philosopher", yet you obviously don't even understand what that means. You dismiss sixteen centuries of philosophy by resorting to childish insults and argumentum ad populum, and then try to hide your ignorance about that by being arrogant. The people who take you seriously just get pissed off by your attitude and leave, and you pretend superiority by proclaiming "victory", just like a drunk trying to pick a fight in a bar.
Curiously, your attitude and even some of the insults you use make you sound just like Daniel Dennett. When you're talking about subjects you know nothing about, you resort to arrogance and pretentiousness to hide your ignorance. When you discuss subjects where you actually have some knowledge, you dismiss any disagreement with childish insults. Considering your enthusiasm for his ideas about consciousness, I'd guess you probably read too much of him and unconsciously absorbed his "style". That's comical, coming from someone who fancies himself as a "bright", rational individual, impervious to the influences suffered by ideological nuts like me.
Re: Suppose we create true artificially intelligent machines….
Even if you're not just splitting hairs now, you already admitted your ignorance, so there's nothing to argue about.
Yes, I suppose that to someone like you, correcting your mistakes about my actual views would seem like "splitting hairs". Got it: you have little concern with facts.
You dismiss sixteen centuries of philosophy by resorting to childish insults and argumentum ad populum, and then try to hide your ignorance about that by being arrogant.
I dismiss sixteen centuries of philosophy when it comes to the study of consciousness, like virtually every living philosopher who studies the subject, because I'm smart enough to recognize that most of them didn't even know how to begin thinking about consciousness. For the most part, they have nothing useful or interesting to contribute to the topic, period.
Go ahead and worship them. I prefer to focus on the cutting edge of scientific and philosophical progress.
Re: Suppose we create true artificially intelligent machines….
Yes, I suppose that to someone like you, correcting your mistakes about my actual views would seem like "splitting hairs". Got it: you have little concern with facts.
Read again. I said "even if you're not just splitting hairs", meaning it doesn't make a difference what you said. What matters is you already admitted you only have superficial knowledge and wouldn't know what that difference is.
If I'm wrong, stop the tergiversation and explain. But you won't. You will just keep pretending superiority like you always do.
because I'm smart enough to recognize that most of them didn't even know how to begin thinking about consciousness.
You're smart enough to know what they knew without studying them? Sounds more like you're confusing intelligence with arrogance and delusion, as you always do.
For the most part, they have nothing useful or interesting to contribute to the topic, period.
Why exactly? Can you even explain Aristotle's theory of consciousness, or are you also going to claim he had none just because you read that in Richard Rorty?
Obviously you can't explain that, and you can't even understand what they might have to contribute because you already discarded that when you made your choice to believe in materialist dogma and confuse consciousness with its side-effects.
You're entitled to an opinion, obviously, and that opinion is based on the superficial readings you made as an undergraduate and your naive belief in the caricature you built yourself.
Go ahead and worship them. I prefer to focus on the cutting edge of scientific and philosophical progress.
Progress? How do you know you're making progress if you don't know where it ends? It's really funny how you fancy yourself as the apex of rational thought, but you're always making elementary mistakes like that. Just like you confuse truth with utility and justify that error by claiming you're a "pragmatist", you're now confusing truthfulness with novelty and calling yourself a "progressist"?
Seriously, someone who describes himself as an anything "-ist" when his opinions are challenged is the one who is worshipping something unworthy of worship. I guess when you accidentally step on *beep* you say "That was no mistake! I wasn't confused! I'm just a *beep*ist."
Re: Suppose we create true artificially intelligent machines….
You're smart enough to know what they knew without studying them?
I'm smart enough to have figured out, very quickly, that the ancients were not, for the most part, thinking about consciousness in a way that I found valuable or informative. I do not have infinite amounts of time, so it doesn't take much for me to decide that the most prudent action is to cut and run.
You can pretend all you want that this is just a personal quirk of mine, but the fact is that regardless of their individual positions, most modern philosophers of mind share my lack of interest.
Obviously you can't explain that, and you can't even understand what they might have to contribute. . .
No one is stopping you from articulating the great contributions the ancients have made to the understanding of consciousness. That you would prefer to just froth at the mouth like a maniac suggests to me that you can't.
. . .because you already discarded that when you made your choice to believe in materialist dogma and confuse consciousness with its side-effects.
Please, document the difference between "real" consciousness and the side effects I've "confused" it with. This should be entertaining. (Except we both know you won't and can't.)
Progress? How do you know you're making progress if you don't know where it ends?
Except I do know where it "ends", though we'll never get there: a complete narrative of causes and effects leading from sensory inputs to outputs when subjects tell us they are experiencing this or that conscious event (the science part) with clarity and zero metaphysics when dealing with the concepts we employ in discussing such events (the philosophy part).
Just like you confuse truth with utility and justify that error by claiming you're a "pragmatist" . . .
Which, of course, simply begs the question against the pragmatist. But hey, why argue like a rational adult who actually gives a crap when you can just mindlessly rant?
Re: Suppose we create true artificially intelligent machines….
You obviously know almost nothing about the actual history of natural philosophy, which is what transformed into what we now call science during the Enlightenment.
I suggest you read this for a more informed and more thorough account of such a history and the differences between metaphysics, philosophy of nature, and empirical science as both understood from a classical and modern perspective:
Re: Suppose we create true artificially intelligent machines….
I suggest you read this for a more informed and more thorough account of such a history and the differences between metaphysics, philosophy of nature, and empirical science as both understood from a classical and modern perspective
Thanks, I will read this later and respond Wednesday!
But the fact still remains that when you read the actual words of scholars who lived through and shaped the Enlightenment, they were keenly aware that the new ideas being birthed were fundamentally antithetical to the religious and Medieval traditions that preceded them, and represented a serious slap in the face to the assumptions of those traditions.
Re: Suppose we create true artificially intelligent machines….
Okay, I've read the article and found it very informative. My thinking and education are thoroughly grounded in modern philosophy so it was very interesting to see how these three approaches and traditions stand in relation to each other and to the world.
Semi random thoughts:
1. Informative as the blog post was, it really didn't contradict my claim that the Enlightenment was, as a matter of historical fact, a turn away from traditionally religiously dominated ways of talking about reality towards a vocabulary where religious terms and thinking had no role to play.
2. There is a point I've been struggling to articulate and it has to do with how the passage of history perhaps encourages some to look at the fossilized remains of older traditions and artificially, retroactively fashion them into "schools" or "approaches" that have fundamentally different aims than what we find in modern science.
I think there is some room for the view that these thinkers were struggling to understand reality and doing the best they could with the tools they had, and that the emergence of modern philosophy during the Enlightenment is best seen as an improvement on their techniques and goals, not an alternative that went off into an entirely new direction and exists side by side with them. Rather, if understanding reality is your goal, your time is best spent looking at what scientists and scientifically focused philosophers have to say, because they figured out how to do all the stuff the ancients were trying to do, and do it better.
Here is another way of putting it. Philosophers of nature may tell us that they are "concerned with deeper questions: for example, with what has to be true if there is to be any causality at all, or any material substances at all" and that this is a different project than "merely" determining what causes or substances just "happen" to exist.
To which the modern materialist can reply, "We think the only way of ever answering the questions the philosopher of nature wants to ask is by doing physics and cosmology with a healthy dose of philosophy, whose role in this case is to clarify and tighten up concepts. We don't think anyone sitting in an armchair is ever going to even come close."
Now this kind of response could be seen as question begging, and I believe Feser even says as much at one point. Hold that thought.
3. Your blogger ends the post with a lot of snark. I especially loved this passage:
If you will allow to count as scientific only what is quantifiable, predictable, and controllable, then naturally and trivially science is going to be one long success story. But this no more shows that the questions that fall through science's methodological net are not worthy of attention than the fact that you've only taken courses you knew you would excel in shows that the other classes aren't worth taking.
Of course, I'm exactly the sort of philosopher his concluding paragraphs are directed towards, as well you know. And I agree with his later statement that "you can't escape philosophy". That's why I went into philosophy rather than, say, neurology.
Here's the thing. The scientifically minded, materialist approach to understanding reality is an interwoven web of interdependent methods and values, and the most important of those values is rooted in a form of philosophical pragmatism. It isn't as if I think a systematic, air-tight argument for these methods and values can be articulated. I don't think air-tight, systematic arguments exist outside of pure logic and mathematics.
It is merely that once a culture of people (scientists, scientifically rooted philosophers) has come to embrace these methods and values in some form or other, we simply lose interest in what the ancients or their modern defenders say about metaphysics or (for the most part) philosophy of nature. We don't see how those projects, approaches, or ways of talking help.
And yes, this means that in a sense we've decided that the kinds of things that interest us really are the only things worth thinking about when the goal is understanding reality. For us, the questions and interests of the metaphysician or philosopher of nature are either bad questions and interests (given our values), or when they are deemed by us to be legitimate, are better solved our way.
It is fine if people who practice pure philosophy of nature or metaphysics stick to talking about issues the way the ancient Greeks or Medieval apologists did; after all, there is an entire alternative culture of academics who find such discussions of immense value and interest, and as long as there are students willing to attend their classes and pay for their ideas, they will always have a place.
But when such academics think they can look over the shoulders of the scientists and scientifically rooted philosophers of the world and meaningfully comment on our projects, they can't expect to be taken seriously; in fact, they can and should expect to be immediately dismissed, unless they can articulate their critiques in light of our values and our methods.
That's why your critique of artificial intelligence in light of "intrinsic intentionality" can be safely rejected. The very concept literally makes no sense according to the materialist, mechanist paradigm A.I. operates in; or at least, you haven't taken the steps you need to take in order for me to think I need to believe in it. That's why I gave you the frog brain thought experiment. I wanted to give you an opportunity to fit a version of intrinsic intentionality into the web of concepts I already possess and take seriously.
Re: Suppose we create true artificially intelligent machines….
Okay, I've read the article and found it very informative.
That's when you know the post is going to be entertaining.
My thinking and education are thoroughly grounded in modern philosophy
Which means it's not "grounded" at all, but based on a misconception.
the emergence of modern philosophy during the Enlightenment is best seen as an improvement on their techniques and goals
You think that because you only understand modern philosophy, therefore you have no idea of what was left behind.
not an alternative that went off into an entirely new direction and exists side by side with them.
It's an alternative that went off into a dead-end.
Rather, if understanding reality is your goal, your time is best spent looking at what scientists and scientifically focused philosophers have to say
Which means you simply don't realize the difference between understanding an abstraction of reality and understanding reality itself.
And yes, this means that in a sense we've decided that the kinds of things that interest us really are the only things worth thinking about when the goal is understanding reality.
Which means you're not really understanding reality, but just the very limited abstraction of reality you chose as your favorite, and you confuse that with reality itself.
because they figured out how to do all the stuff the ancients were trying to do, and do it better.
It's obvious you don't know what they were trying to do, so how do you know modern science is doing it better?
By definition, modern science has to be stripped of everything that can't be communicated unambiguously and reproduced, and if you do that with philosophy, you end up exactly like you, thinking modern science is the evolution of traditional philosophy, when in fact it's just a very narrow subset of it, with certain metaphysical and epistemological assumptions dogmatically put into place.
Here's a simple thing that you never understood and I doubt you'll understand now, and even if you do you'll be arrogant as usual and say it's nonsense, but I'll try anyway. Traditional philosophy has nothing to contribute to modern science or modern philosophy, collectively. You're right about that, but it's not because it's obsolete or "alternative", but because they're not about finding collective unambiguously communicated reproducible knowledge. They're about personal knowledge, and that might have an effect on your understanding of reality, and might enable you to contribute to modern science or modern philosophy in ways others can't.
Traditional philosophy isn't about proving to others that you're right. It's about sincerely and honestly looking for the truth, even if you might not be able to convince others of that, and that's why it has no appeal for people like you. You're too much of an egocentric, arrogant coward for that. You're not interested in the truth. You're interested in convincing others that you're right. I knew a lot of people like you, who study philosophy not for a genuine interest in the reality we live in or a passion for knowledge. They study because they like to feel superior.
Re: Suppose we create true artificially intelligent machines….
Traditional philosophy isn't about proving to others that you're right. It's about sincerely and honestly looking for the truth, even if you might not be able to convince others of that, and that's why it has no appeal for people like you. You're too much of an egocentric arrogant coward for that. You're not interested in the truth. You're interested in convincing others that you're right. I knew a lot of people like you, who study philosophy not for a genuine interest in the reality we live or a passion for knowledge. They study because they like to feel superior.
Seems to me you're not talking about knowledge but faith.
Modern science is about convincing others that you're right because that's how we get things straight and clear between us so we can move on to the next step.
Faith and "personal knowledge" would have us going maybe the Earth is flat and maybe it is orbited by the Sun.
Also, this: http://www.imdb.com/board/bd0000130/nest/263626706
Re: Suppose we create true artificially intelligent machines….
Seems to me you're not talking about knowledge but faith.
No, faith is something else entirely. Faith is trust in someone else's testimony of evidence. If you're a scientist and you assume a colleague's experimental results are correct without reproducing them, you have faith in him and assume the evidence reported by him is valid as if you had seen it yourself.
Modern science is about convincing others that you're right
Convincing others that you're right isn't the same as knowing the truth. You might know the truth and not have the means to convince anyone of it (just think of someone wrongfully convicted of a crime), and you might be someone very skilled at persuasion who can convince others of things that aren't the truth (think of any good politician or lawyer).
because that's how we get things straight and clear between us so we can move on to the next step.
Getting things straight and clear isn't the same as knowing the truth. That's just a heuristic to get rid of problematic issues that might hinder practical applications of some knowledge. It's a method to avoid getting stuck arguing about an issue that can't be solved now and to move forward, not a method to find the truth.
By the way, the concept of Dogma in the Catholic Church is the exact same thing, although most people think of a dogma as something imposed without any justification.
Faith and "personal knowledge" would have us going maybe the Earth is flat and maybe it is orbited by the Sun.
I think you have the wrong idea about what I mean by "personal knowledge". All knowledge is personal. There's no knowledge outside of a cognizant subject. Words written on paper aren't knowledge until somebody reads and understands them. If multiple cognizant subjects agree on something they individually know, you have a consensus, and it's easier to get others to agree on something when you can convince them of it.
Modern science is about reaching consensus. Philosophy is about searching for the truth, regardless of the consensus. The problem is that in the modern world, especially in the English-speaking world, where "philosopher" became an academic profession and "philosophy" a set of fields determined by bureaucratic requirements in academic institutions, you won't be successful as a "philosopher" if you can't convince others. This establishment produces little creatures like Faustus5, who actually went to college and majored in Philosophy, fancies himself as a philosopher, thinks he's doing philosophy, but is just awkwardly trying to use the scientific method to solve problems that are beyond its scope, and doesn't even realize it.
Suppose we create true artificially intelligent machines….
Arthur C. Clarke said:
The popular idea, fostered by comic strips and the cheaper forms of science fiction, that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it. Those who picture machines as active enemies are merely projecting their own aggressive[ness]. The higher the intelligence, the greater the degree of co-operativeness. If there is ever a war between men and machines, it is easy to guess who will start it.
Do you believe this to be accurate?
I think that we create machines in general, whether they are artificially intelligent or not, as a way of getting around our limitations. Were we to truly create A.I. would we not be making ourselves obsolete in many ways? This of course leads to thoughts of transhumanism and the like. Is this our inevitable future?
"Once you assume a creator and a plan, it makes us objects in an experiment." - Christopher Hitchens