Napkin Philosophy


A note on context [Apr. 9th, 2006|02:01 am]
xiota

The very thing that distinguishes human reasoning from anything that is easy to program is context.

Context is defined by three things.

1. An environment - specifically, reacting to stimuli in the environment and having them change how you reason
2. Identity - an idea of who you are (something a computer generally doesn't have - instead, computers are usually given a "purpose")
3. Communication - in other words, other agents you can talk to (like environment, but more specific)

Context is essential to any model of human intelligence, and to any artificial intelligence approach that hopes to be successful - at least in a full, strong-AI sense (more than Amazon knowing what kinds of things you like).
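
To make this concrete, here is a minimal sketch (in Python; the Agent class and every name in it are hypothetical, not an established model) of an agent carrying all three parts of context: stimuli from an environment change what it reasons with, it has an identity rather than just a purpose, and it can pass context along to other agents.

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        identity: str                  # who the agent is, not merely what it is for
        beliefs: dict = field(default_factory=dict)

        def sense(self, stimulus, value):
            # environment: stimuli change the state the agent reasons with
            self.beliefs[stimulus] = value

        def tell(self, other, stimulus):
            # communication: pass part of one's context to another agent
            other.sense(stimulus, self.beliefs.get(stimulus))

        def decide(self, question):
            # reasoning is conditioned on whatever context has accumulated
            return self.beliefs.get(question, "no context yet")

    a, b = Agent("alice"), Agent("bob")
    a.sense("weather", "raining")
    a.tell(b, "weather")
    print(b.decide("weather"))   # -> raining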

Proposal: The singularity [Apr. 9th, 2006|11:12 pm]
xiota

In The Age of Spiritual Machines, Ray Kurzweil argues that at some point in the near future (a decade or two from now, more or less, at his guess), technological advance will accelerate to the point at which humanity creates intelligence surpassing its own and, since this is a "singularity", events will cease to be predictable in the ways they are today (an idea originally put forth by mathematician Vernor Vinge).

What will this mean for, say, physics? Well, if technology continues to advance at an exponential rate, why is it absurd to suggest that mankind will someday affect the larger bodies of the solar system, the way astrophysical processes are thought to? Could we build solar systems in the future?

Our effect on biology is already rather clear - technological evolution is an extension of biological evolution, as far as we can tell.

What other areas will the "singularity" cause to cease to be predictable in classical senses?

Some administrative work [Apr. 9th, 2006|10:04 pm]
xiota

The Secret is Out
I thought I'd note that the requirement that new posts be made friends-only has been lifted. The attraction of a "secret club" has not played out well for this community, so I have decided to make it more inclusive.

Purpose
Now, I'm not one to demand any amount of teleology in any situation, but I think it may be necessary for a LiveJournal community. So I have decided to outline a specific purpose for this community - up for review by any commenting member, if anyone even reads this anymore.

The purpose of this community will henceforth be considered to be thought provocation and formalization (not necessarily "philosophy").

We are not setting out to solve all of the world's problems, or necessarily even to say anything all that profound. We are here to think about things, formalize these thoughts, possibly form theories and models, and just to discuss.

Worst case scenario? I'll use this community to collect my own thoughts surrounding things I read or hear about.

Info
Also, notice the new and more aesthetically pleasing community profile, complete with an abridged back story for the community's name ;).

Reasoning: Artificial Intelligence [Mar. 17th, 2006|11:10 am]
xiota

Continuing from the proposal post about the development of reasoning, I'd like to relate this back to AI.

If we are to model human intelligence or reasoning in a computer, we'd want to cover the two basic areas: memory and reasoning.

What this amounts to is that an intelligent machine must:

1. Be able to both learn and use knowledge ("memory")
2. Be able to reason with logic, with or without memory ("reasoning")

But wait, there's more:

3. Be able to connect on an emotional or social level

This last one is only for "robots" meant to model humans as a whole - not for utilitarian AI applications (e.g., using AI for knowledge discovery in databases).

So 1 and 2 are the essential aspects of AI - and, lo and behold, that is what AI research is focusing on: machine learning (1) and reasoning (with uncertainty, causal reasoning, etc.) (2).
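
To put 1 and 2 side by side, here is a napkin-sized sketch (Python; the rules and facts are invented for illustration, not taken from any real system): a memory that learns facts, and a reasoner that forward-chains simple rules over whatever facts it is handed - from memory or otherwise.

    # toy split between "memory" (1) and "reasoning" (2); all names are made up
    memory = set()                  # learned knowledge
    rules = [                       # premises -> conclusion
        ({"has_fur", "gives_milk"}, "is_mammal"),
        ({"is_mammal"}, "is_animal"),
    ]

    def learn(fact):
        memory.add(fact)            # (1) acquire and store knowledge

    def reason(facts):
        # (2) forward-chain the rules until nothing new can be concluded;
        # works on memory or on facts supplied directly ("without memory")
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    learn("has_fur")
    learn("gives_milk")
    print(reason(memory))           # derives is_mammal, then is_animal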

This is just to clarify the framework a little bit, I guess.

The Chip: Neural Implants [Mar. 17th, 2006|10:59 am]
xiota

I've been going through The Age of Spiritual Machines (recommended by Norm) lately, and some parts of it have made me think about the Chip.

First of all, Kurzweil says that neural implants are in the not-so-distant future - that is, electronic neurons (apparently a million times faster than biological neurons). These draw on the area of AI research known as neural nets: each neuron does pattern detection and outputs a very simple result, but put together in an extremely parallel network, they produce exactly what our brain produces.
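
For a sense of just how simple each unit is, here is a single artificial neuron in a few lines of Python (the weights and inputs are arbitrary toy numbers): it weighs its inputs, sums them, and squashes the total. Everything interesting comes from wiring huge numbers of these together in parallel.

    import math

    def neuron(inputs, weights, bias):
        # each unit does something very simple: weight, sum, squash
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-total))   # sigmoid activation

    # one neuron's output means little on its own...
    print(neuron([0.5, 0.9], [1.2, -0.7], 0.1))

    # ...a "layer" is just many neurons run in parallel on the same inputs
    def layer(inputs, weight_rows, biases):
        return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]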

Somehow, the book says these neural implants could function in just about the same way as the Chip, but on a nano scale. That actually seems like a much better idea than a big old integrated circuit attached next to the brain: make it the brain itself.

The end of the Chip?

Proposal: The foundation of reason [Mar. 5th, 2006|12:42 am]
xiota

I've recently been thinking about the evolution of humans and the development of our ability to reason.

Ryan and I have talked about this in the past, and, to his credit, he thought about the evolution of the uses of the frontal lobe (and the cerebrum in general). The other day, in philosophy club, a fellow named Matthew and I discussed this issue further and came to a few conclusions.

The development of memory and reasoning

What evolutionary purpose did memory serve originally? We speculated that memory allowed primitive man access to his past - at will, even. This was a huge progression from animal brains, which could only be conditioned and held very simple past data, such as basic recognition.

Now, it is important to note that the development of memory had to precede the development of the ability to reason. This is because reason cannot function without having data to refer to in the first place.

Basically, reason affords us the power to predict the future. We can use deduction and induction (and abduction: reasoning to the best explanation) due to the very fact that we can both:

1. Remember past events
2. Discover patterns and regularities

We use the knowledge of the sun always coming up to inductively conclude that it will, in fact, come up tomorrow. We use our knowledge of taxonomy to deduce that whales and kittens are both mammals. We utilize our memory to conclude what is most likely the cause of some event.
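
The dependence of reasoning on memory can be shown in a few lines (a toy sketch, not a serious cognitive model; the data is invented): induction here is just noticing a regularity in remembered observations and projecting it forward, which is impossible while memory is empty.

    # toy induction over remembered observations
    memory = []                       # reason has nothing to refer to until this fills

    def remember(event):              # 1. remember past events
        memory.append(event)

    def induce(event):                # 2. discover a regularity and project it
        if not memory:
            return "cannot reason: no memory to refer to"
        if all(e == event for e in memory):
            return f"expect '{event}' tomorrow"   # the sun has always come up...
        return "no regularity found"

    print(induce("sun came up"))      # with no memory, no conclusion
    for _ in range(1000):
        remember("sun came up")
    print(induce("sun came up"))      # -> expect 'sun came up' tomorrow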

The basic idea is, to reiterate, that memory came first, then the ability to reason.

Why did these even develop at all, though?

The evolutionary benefit of reasoning

The purpose for these developments is, naturally, hunting!

The power to remember what the elk did last time you were hunting it, coupled with the ability to assume this knowledge will apply in future cases, afforded primitive man a HUGE advantage over his prey.

The rest is history.

The application of primitive reasoning for modern humans

What do we use memory and reasoning for these days? Science, engineering, philosophy, art, etc.

Is it just a coincidence that our hunting advantage now applies to much more abstract ideas and concepts?

I think not. Perhaps our abilities to predict the weather, solve differential equations, and program in Java are nothing more than a figurative "hunt."

Does this mean that there must be some kind of "prey" that we're after? Or is it just that we would gain pride and superiority from a successful hunt and, therefore, we now try to get these same things from our daily "modern" hunting?

Future work

This is a napkin philosophy proposal because I wish to develop this theory. The future work for this is:

1. Formalize it using mathematical modeling, logic, or some other format
2. Correct errors to ensure that it is, in fact, correct - and not missing any important details
3. Consider the role of primitive women and how that applies today

Proposal: Research [Feb. 14th, 2006|11:14 pm]
xiota
[music |Gorillaz - Kids With Guns]

Greetings inactive MTU napkin philosophers. I've been thinking (oh the drama) a lot lately.

You know what we should do? What is the "highest level" of "napkin philosophizing"? Research, of course!

Universities do research. We go to a university. Wouldn't it be ideal if one (or more!) professors were willing to support/advise our "napkin philosophizing" efforts - in the lab?!

Idea 1: Big Research Team

So, here's my idea. When is research done best? When you have a team! I don't think we could very easily set up a research center for our purposes or anything, but I think it may be possible for the more "theoretically inclined" of us Techies to set up some kind of research team consisting of people we know (within reasonable numerical limits, of course).

We'd need the support of at least one professor, but the more the better. Depending on what kind of project we'd decide on, the number and department of professors willing to support our efforts would change (interdepartmental professor support... ::drools::).

Now, this may be just dreaming, but it sounds like a good idea at the moment. It would just be so great to collaborate with a *larger number* of people I know who are interested in research. Power in numbers, right?

And I won't lie: I have a fetish for organizing groups to achieve common goals.

Idea 2: Normal Research

It is not inconceivable that a professor would be willing to support a smaller team on a topic they are more familiar with - e.g., Nilifur and artificial intelligence, Steve Carr and compilers, Pastel and HCI, etc. This is the more common version of undergraduate research; it's not bad, but not as attractive as idea 1.

--

I think either would be fun, to be honest, and a bonding experience (the crowd groans) for those of us planning to graduate next year.

Does anyone have any interest in either?

The Chip: Remote Login [Jan. 3rd, 2006|02:51 pm]
xiota

I was thinking a while back about what would be fun with the Chip: remote login. You could log in to another host (i.e., body) and use its resources - input and output devices.

That is, if you have the permissions, you could share the input devices: the five senses, basically. So, you could see things through another person.

Also, if you have permissions, you could use the output devices - that is, send signals to the person's arms and so on. So, with the input and output devices, you could control the person entirely - a la Ghost in the Shell, basically.

Of course, like computers nowadays, you would have to have security - both permissions and encryption on the login (e.g., ssh).
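
Borrowing from how computers handle this today, a napkin version of the permission check might look like the sketch below (Python; the ChipHost class and every capability name are hypothetical). Each host grants per-user capabilities, and a login only attaches to the "devices" that were granted; as with ssh, the transport would also be encrypted, which is omitted here.

    # hypothetical permission model for Chip remote login
    class ChipHost:
        def __init__(self, owner):
            self.owner = owner
            self.grants = {}                   # user -> set of capabilities

        def grant(self, user, *capabilities):
            self.grants.setdefault(user, set()).update(capabilities)

        def login(self, user, capability):
            if capability in self.grants.get(user, set()):
                return f"{user} attached to {self.owner}'s {capability}"
            raise PermissionError(f"{user} may not use {capability}")

    host = ChipHost("alice")
    host.grant("bob", "sight", "hearing")      # input devices only
    print(host.login("bob", "sight"))          # shared senses: allowed
    try:
        host.login("bob", "motor")             # output devices: denied
    except PermissionError as e:
        print(e)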

Any other things to consider for this very super duper awesome feature of the Chip?

Love Theory: Response to the collapsing of interactions and traits [Dec. 10th, 2005|03:58 am]
xiota

Proposition: both interactions (iota) and classical traits (tau) are, really, traits

Translation: one loves (or doesn't love) someone because of how they view/value/perceive the other person's actions and semi-static traits

Counter-example: one's love can be swayed by the interactions "working" or "not working", which makes interactions inherently distinct from traits

QED

Proposal: "Modeling" - the roles of intuition and logic in creating models [Dec. 3rd, 2005|08:12 pm]
xiota

There are some people who would prefer to shrug off logic as some elitist method used to make others feel stupid. These people would prefer to use their feelings to verify the truth of something.

Hopefully people in this community realize the problem with this. If not, we can discuss further if you'd like.

What is important to note, though, is that emotion - or, philosophically speaking, intuition - has its place in the kind of thinking we're doing here: modeling. Let's be truthful: when we come up with an idea, sometimes it just comes into our heads. We don't know why, but some connection was made such that we thought of this new idea.

The ideal of always having our brains use logic to come up with new ideas is not just unrealistic; it's probably dangerous, since creativity could easily be abolished if that were the case.

So, we can assume that intuition is necessary for the creation of new concepts and models. What about verifying them? Classically, logic is the tool used to verify models, with the consistency of the model being the chief property tested. Another popular method of verifying a model is how well it works in application, given a set of instances of it being applied. This is basically "experimenting" with the model and thus uses the logic of induction.
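
As a napkin illustration of the consistency test (a toy sketch in Python; the claims and their encoding are invented), a "model" can be flattened into signed claims and checked for any proposition asserted both true and false:

    # toy consistency check: a model as a set of (proposition, truth) claims
    def consistent(claims):
        seen = {}
        for proposition, truth in claims:
            if seen.setdefault(proposition, truth) != truth:
                return False          # the model asserts both P and not-P
        return True

    print(consistent({("maximize_welfare", True), ("keep_promises", True)}))   # True
    print(consistent({("lying_is_wrong", True), ("lying_is_wrong", False)}))   # False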

That's all well and good, but what about models that cannot be experimented on (but are nonetheless internally consistent - i.e., have no self-contradictions)? A good example of this kind of model is a theory of consequentialist ethics. Since this kind of ethical system is actually impossible to implement (since no one is psychic), the only way to verify its "correctness" is intuition!

That is, consequentialism, at first glance, seems like a good idea. That's intuition talking.

The question, then, is whether or not intuition is a valid method for verification of models. I suggest that it is not, since any model or theory that requires it for verification is probably flawed (such as in the case of consequentialism).
