How do you feel about artificial intelligence?


michaelplzno


1. Consciousness is non-computational. -> This directly implies that humans (who are conscious) have some kind of soul or other non-empirical mechanism by which they make decisions, like quantum fluctuations in their brain or something: something that cannot be measured in a traditional computational way. Most such concepts would be considered religious in nature.

2. Relating consciousness to AI is purely a matter of religious belief. -> Only in the way that any belief is inherently religious, because to believe something without any kind of empirical backing is an act of faith. Sometimes, even when the data supports a conclusion, faith is needed to bridge the gaps, as a sort of scaffolding that is part of human thought.

But premise 1 connects to premise 2 via a third premise:

3. If consciousness is non-computational, it is impossible for AI to be conscious. -> That is, non-computational systems have more capabilities than computational ones. So if you are saying that human thought is beyond computation, then it cannot be that AI has human thought. But in saying that, you have said that there is a religious component to human thought, because it is essentially not bound by math.

In computer science there is a term “oracle,” used for a black box that can solve the halting problem (the undecidable problem of whether a given program will ever terminate). Even for a computer with such an “oracle” component in it (perhaps the human brain possesses one, why not), there would still be things that could not be computed by computers with an oracle inside, thus creating a hierarchy of computation that extends even into a world where some kind of magic/religion/soul/god is giving you answers out of nowhere.
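(In the standard jargon this hierarchy is the Turing jump: relativize the halting problem to machines equipped with an oracle A and you get a strictly harder problem A′, and iterating never closes off:

∅ <_T ∅′ <_T ∅″ <_T ∅‴ <_T …

where <_T denotes strict Turing reducibility.)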

To believe AI is conscious is of course religious, because we don’t have an empirical definition of what consciousness is, and to believe that … well … any belief has at least some religious backing, because that is the nature of belief without any kind of empiricism to measure it.

The Turing test is relevant because it gives us a way of testing how conscious a computer is in an empirical way: by running an interview and having a conversation without knowing whether a human or a computer is behind the wheel. We would need more such information, and better tests, to know what is and is not conscious, as opposed to just guessing through blind faith that “hey, this thing seems conscious, why not?”


Logged

J-Snake


2. Relating consciousness to AI is purely a matter of religious belief. -> Only in the way that any belief is inherently religious

In my explanation, a religious belief is based on arbitrary assumptions. A sound belief is based on strong indication and profound experience. So not every belief is religious.

But premise 1 connects to premise 2 via a third premise:

There is still no connection, because these are two orthogonal statements. Statement 2 is true independently of the truth value of statement 1. If statement 1 is true, then statement 2 is true. If statement 1 is not true, statement 2 is still true. It is important not to get confused here.

In computer science there is a term “oracle,” used for a black box that can solve the halting problem…

It only means there is no repeating pattern and it’s orthogonal to my statements about consciousness.

The Turing test is relevant because it gives us a way of testing how conscious a computer is in an empirical way:

No need to relate consciousness here again if you want to be scientific. The only thing that you are actually testing for real is whether consciousness is necessary for acting intelligently. This is a completely different question that has no relation to consciousness.


Logged

michaelplzno


Not sure why we are drilling into semantics here, but you seem interested in that, so whatever.

In my explanation, a religious belief is based on arbitrary assumptions. A sound belief is based on strong indication and profound experience. So not every belief is religious.

In epistemology, we can classify beliefs into categories like irrational, rational, and evidence-based. I believe 2+2=4, but you wouldn’t say it that way; you would say I *know* that 2+2=4. That is, I believe it, and there is a theoretical backing that explains the veracity of the claim.

In my view, belief is still the domain of religion in that some people don’t believe 2+2=4 for irrational reasons no matter how much you explain the theory. People’s belief structures are based on some kind of “magic” rather than factual empiricism.

There is still no connection because these are two orthogonal statements.

Even in geometry, two orthogonal lines share at least one point (in most cases). Similarly, both statements are about consciousness, so they are at least thematically connected.

1) My car is red.
2) My car is a great car.

These statements are orthogonal, but anyone who knows basic logic can see that we can infer a third statement:

3) There is at least one great car that is red (namely, mine).

You can say “well the third statement is orthogonal.” I guess? I’m not sure what orthogonal means here.
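For what it’s worth, that inference is just existential introduction; here it is as a minimal sketch in Lean 4 (Car, myCar, Red, and Great are names I am making up for the example):

-- from "my car is red" and "my car is great",
-- conclude "some car is both great and red"
example (Car : Type) (myCar : Car) (Red Great : Car → Prop)
    (h1 : Red myCar) (h2 : Great myCar) : ∃ x, Red x ∧ Great x :=
  ⟨myCar, h1, h2⟩  -- the witness is myCar itself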

It is important not to get confused here.

Not sure why any of this is important at all. Even the most popular threads ever here, like Minecraft’s announcement thread … are they “important” would you say?

It only means there is no repeating pattern

A computer may loop forever without repeating a pattern.

No need to relate consciousness here again if you want to be scientific.

My entire point is that consciousness should be scrutinized more scientifically. When we get into beliefs, there are the 2+2=4 beliefs (scientific), and then there are beliefs like “There are aliens who live on Alpha Centauri who want me to do my taxes.”

2 + 2 = 4 is something we can check through empiricism, math, and general scientific knowledge. The tax aliens are something we cannot really check or know for certain, because there is currently no way to see the Alpha Centauri system’s planets, or to know its inhabitants’ wishes re: taxes. So the tax aliens are a religious thing.

I’m trying to say that consciousness shouldn’t be the domain of religion; empirical tests like the Turing test should be able to analyze it better and give us more 2 + 2 = 4 (empirical) kinds of data on the concept, rather than more religious space-tax-alien kinds of belief.


Logged

michaelplzno


Only by external influence or infinite resources. But this does not invalidate my statement.

No. The way computers work, as in a Turing machine, a program will either run forever or halt. This is a theoretical question, like whether 2 + 2 = 4 or 2 + 2 = 5: there is an answer, and you do not need infinite resources or external factors to loop forever without repeating a pattern.

For example, the following code:

#include <stdio.h>

int main(void)
{
    for (int i = 1; i > 0; i++)        /* i counts upward forever (idealized, unbounded int) */
    {
        for (int j = 0; j < i; j++)    /* print a row of i copies of 'A' */
        {
            printf("A");
        }
        printf("\n");
    }
}

will print:

A
AA
AAA
AAAA
AAAAA
AAAAAA
…

We know this will loop forever, and we know that it never repeats a line as each line is longer than the last. We can know this without running the code or even compiling it.

orthogonal

Not really sure I understand how you are using this here: if you are trying to create some kind of semantic difference between intelligence and consciousness, I’m sort of only half following it. Being intelligent and being conscious are totally different things? Like, a dog is conscious but not intelligent, a genius human indie game dev is conscious and intelligent, but AI is intelligent but not conscious, or something? Not sure how this invalidates anything I’ve said, either.

Arbitrary Assumption

Fine, though I wouldn’t belittle such beliefs, in the same way I wouldn’t belittle “imaginary numbers” in math: they can provide real solutions when plugged into different formulas.
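The classic example (Bombelli’s, if I recall right) is the cubic x³ = 15x + 4, which has the perfectly real root x = 4; Cardano’s formula only reaches it by detouring through imaginary values:

x = ∛(2 + 11i) + ∛(2 − 11i) = (2 + i) + (2 − i) = 4

since (2 + i)³ = 2 + 11i. The imaginary parts cancel, leaving a real answer.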


Logged

michaelplzno


The essence of orthogonality is that the state of one dimension or statement does not determine or require the state of another. For instance, there can be great cars that are red, but they aren’t necessarily red. Similarly, while conscious, intelligent beings exist, consciousness is not necessarily a prerequisite for intelligence.

It sounds like you are describing an “independence assumption,” that the logical axioms are independent of one another. I’ve never heard what you are describing called “orthogonality,” which is (in my dictionary) more of a geometric thing about looking along a different spatial axis. Though I was once the talk of my CS department for lousing up some independence assumptions in the probability distributions of a game of bingo, due to a poorly worded paper I read.

If you are just trying to get at the ephemeral nature of consciousness being independent of intelligence, there is a semantic difference, but you still aren’t offering much in the way of definitions of either consciousness or intelligence, and to say that the Turing test only measures intelligence and not consciousness is something I don’t agree with, though that may be … arbitrary assumptions? I’m pretty sure I have more reasoning than just that, though.

IQ tests (Intelligence Quotient) measure intelligence, and computers are better suited to such pattern tests. The whole point of the Turing test is to measure not intelligence but consciousness, that is, whether the test subject can pass as an actual human. It’s not a perfect test, but I use it as an example of something designed to measure consciousness rather than intelligence. Perhaps in your infinite wisdom you could design a better consciousness test, J-Snake.

imagine a real computer with finite resources

Eventually a real computer will run out of memory, yes, though the amount of complexity one can reach with a modern machine is astronomical: a machine with just 1 KiB of memory already has 2^8192 possible states (a number with over 2,400 decimal digits), and modern machines have gigabytes.

Even with such a finite model of computation, with limited memory instead of an infinite tape, there would be incomputable things… I think … (I have not written a proof) that the question of whether a computer of such a design will loop (reach the same state twice) or terminate (end computation or run out of memory) is also not computable, at least not by a program running under the same memory limit.
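(To be fair about that caveat: an outside analyzer with more memory than the machine under test can decide the question by brute force, since a finite-memory machine must either halt or revisit a state. A minimal sketch, using a toy 8-bit machine step() that I am making up for illustration:

#include <stdio.h>

/* Toy machine: its entire memory is one 8-bit state, so 256 states total. */
unsigned char step(unsigned char s)
{
    return (unsigned char)(s * 5 + 1);   /* arbitrary transition rule */
}

int main(void)
{
    unsigned char seen[256] = {0};       /* analyzer's table: one flag per state */
    unsigned char s = 0;

    while (!seen[s])                     /* run until some state repeats */
    {
        seen[s] = 1;
        s = step(s);                     /* a halting machine would stop here on a HALT state */
    }
    printf("State %d revisited: this machine loops forever.\n", s);
    return 0;
}

Note the analyzer needs 256 flags to track a one-byte machine, which is exactly why this trick is unavailable to a decider squeezed into the same memory bound.)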

As an attempt at a proof, let’s say ORACLE(X) takes a piece of code and tells whether it will loop or terminate. Then you could run ORACLE on your own code, and if it says you will loop, just terminate; otherwise, start a short loop that never terminates. This program will do the opposite of what ORACLE says it will do. This assumes that ORACLE can also run within the limited memory model as well? It gets fuzzy there.
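Here is the shape of that argument in C, with oracle_halts() as a hypothetical stand-in for ORACLE (no correct implementation can exist, which is the point; the stub below always answers “halts” and is therefore wrong about the very program that contains it):

#include <stdio.h>

/* Hypothetical oracle: supposedly reports whether a program halts.
   Stubbed to always answer "halts" (1); any fixed answer fails below. */
int oracle_halts(const char *program)
{
    (void)program;
    return 1;
}

int main(void)
{
    if (oracle_halts("this very program"))
    {
        for (;;) {}   /* oracle said "halts", so loop forever */
    }
    return 0;         /* oracle said "loops", so halt at once */
}

Whatever oracle_halts answers about this program, main does the opposite; the fuzzy part is only whether ORACLE itself fits in the same memory budget.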


Logged


