Monday, November 12, 2012

Is Vitalism an alternative explanation to Intelligent Design?

Dr. Nagel claims that while ID effectively shows the inadequacy of chance and necessity to explain biological organisms and processes, intelligent agency is not the only alternative and is less preferable than a more unitive principle.  Dr. Barham extends this argument with a few examples based around the more ancient notion of vitalism.

http://www.thebestschools.org/bestschoolsblog/2012/11/12/nagel-dembski-life-mind/

Vitalism does seem to be one implication of ID-based science, as shown by Dr. Wells and Dr. Sternberg.  For example, in the case of fetal development, claiming that the entire developmental process is the product of God's continuous direct intervention seems equivalent to saying the same about the operation of all physical processes: since such an intervention can explain everything, it would explain nothing.  Furthermore, since the fetus is not conscious for a significant portion of development, the development can't be attributed to the fetus' conscious activity.

Additionally, Dr. Sternberg has shown that the information necessary for fetal development cannot be contained within the original sperm and egg.  Therefore, some external source of developmental information is necessary.  I'm not sure if Dr. Sternberg has ruled out the fetus' environment, but my impression is he did not think the physical matter involved in the fetus' development could account for the necessary developmental information.

So, if neither the fetus nor its environment can account for the fetus' development, there must be a non-physical source of information that develops the fetus.  This begins to look a lot like Aristotelian vitalism, and it does not need direct intelligent agency to explain its operation.

However, this does not solve the information problem; it merely pushes the problem up a level, and Dr. Dembski's design inference argument is just as applicable.  Again, the question must be asked: where does the information of the non-physical vital process come from?  What is needed is an information creator.

Wednesday, September 5, 2012

Materialism and human rights


Recently, an MIT researcher argued that robots may gain legal rights.

The arguments for why this could and should be done revolve around anthropomorphism and proper conduct.  Such arguments do not in fact support robots having rights; they are more similar to the laws we have now against media that portray, or allow people to simulate, certain acts that would be atrocious if real.  By outlawing such products, we are not giving rights to fictional characters.

However, from a materialistic point of view, extending rights to robots does make sense.  Within materialism, humans are essentially very complex robots, so by giving rights to humans we are already giving rights to robots.  There is little difference between giving rights to robots produced by evolution and giving rights to robots produced by those robots.

Rights do not stop at robots, though.  A robot is essentially software hooked up to a set of actuators and sensors, and software can be embedded in many different media besides silicon and circuitry.  For example, rocks in a desert can be a computer: http://xkcd.com/505/
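To make the substrate-independence point concrete, here is a minimal Python sketch (my own illustration, not taken from the xkcd strip) of Rule 110, a one-dimensional cellular automaton known to be Turing-complete.  Each cell could just as well be a rock that is present or absent on the desert floor; the "computer" is nothing but a simple update rule applied row by row:

```python
# Rule 110: each rock's next state depends only on itself and its two
# neighbors.  The rule table below is the standard Rule 110 definition.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    """Compute the next row of rocks from the current one (edges held at 0)."""
    padded = [0] + row + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

row = [0] * 30 + [1]              # a single rock at the right edge
for _ in range(15):               # each generation is one pass over the desert
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Since Rule 110 is Turing-complete, in principle any robot control program could be compiled down to rows of rocks like these, which is all the argument here requires.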

Consequently, robotic software can be embedded in this rock computer, turning the desert into a robot, and thus conferring rights on the desert.

In short, pretty much any physical object can end up getting rights within materialism.  So next time your computer locks up and you start beating it with your keyboard, be careful, or it might take you to court!

The question remains: if materialism cannot give a robust and coherent account of human rights, what can?  Well, Intelligent Design points us in the right direction:
http://appliedintelligentdesign.blogspot.com/2011/10/human-dignity.html

Wednesday, August 22, 2012

What does the Bible mean by "Natural"?

Some think the biblical language about homosexuality being unnatural is merely referring to cultural norms. However, that is not the case:

The ancient worldview is that there is a moral order to reality, which is tied to functionality. So, the reason male/female sex is natural is that the function of sex is procreation. This may be a cultural belief in the sense that a certain culture held it, but it is not cultural in the sense that the assumptions are purposefully based on societal norms. The assumptions are based on primitive observations of the world.

Today we, especially the highly educated, don't hold these assumptions: A) there is inherent functionality in nature, B) this inherent functionality prescribes a moral order, and C) following this moral order is key to flourishing and happiness as human beings. It is we who have had to be educated that our primitive observations are wrong, so technically speaking our views are the more culturally conditioned.

You can notice this in everyday language, such as how we use the terms right and wrong and natural and unnatural - they all tend to be tied to the notion of a moral order and natural function. For example, you probably don't consider it too strange to say that eyes are for seeing, the mouth is for speaking and eating, the stomach is for digesting food, etc. But these are all instances of a primitive notion of functionality, which we have been taught is wrong.

However, none of this is to say the primitive view is right and our modern view is wrong. It is to show that people in Paul's day did not think they were referring to cultural norms when saying homosexuality was unnatural. What they were saying is that homosexuality goes against the natural purpose of sex and is destructive to humankind as a whole because it reduces the likelihood that the society will survive (i.e. it contributes to reducing the birthrate below sustainable levels, see Europe). For example, Plato prescribes the death penalty for homosexuality and masturbation in his Laws dialogue for precisely this reason. Christians went further and claimed the natural moral order was ordained by God Himself, and those who engaged in unnatural behavior were going against God, not merely going against nature.

Intelligent Design is one scientific technique for identifying instances of functionality in nature, the same functionality that underlies the traditional moral order.

Sunday, August 19, 2012

Why environmentalism needs Intelligent Design, and Intelligent Design implies environmentalism

The big implication of ID is that human beings, as intelligent agents, are completely unique among all the other created things on earth.  They are the only beings exhibiting the ability to create complex specified information (CSI).

If we can infer that CSI makes nature run (nature seems to consist of many very complex systems held together by precise functional specifications), and furthermore that the 2nd law of thermodynamics leads to the breakdown of CSI, then CSI must be maintained by some intelligent agent to keep nature running.

A Biblical example is the injunction in Genesis for man to cultivate creation.  There are also a number of historical examples showing man must continue to cultivate creation or it falls apart.  For example, archeologists believe parts of the rainforest were grown by Indians and the soil was specially constructed to replenish itself.  Another example is the Great Plains in America.  Archeologists believe the plains were purposefully designed by the Indians as grazing grounds, and once the Indians disappeared from the land the animals began to reproduce out of control, which gave rise to buffalo stampedes.  Finally, the reason there are so many forest fires in Californian forests is that the Indians used to conduct controlled burns to keep the fire fuel from building up.  However, current environmentalists push a non-intervention approach to maintaining the forests and discourage controlled burns, hence the greater number of forest fires.

Anyways, I see ID having two general implications.  First, contra modern environmentalists, humans have an extremely important role in the well-being of nature, and if humans were eliminated, as some environmental extremists want, then nature would likely collapse.  Second, this also means that we cannot just subvert nature to our own ends.  We must understand the functional information stored in nature before changing its functionality.  Our technology will contain much more CSI if it is built in line with nature's master plan than if it is built contrary to that plan.

Saturday, August 11, 2012

Rome's true relationship to ID and evolution

Rome holds a tentative position regarding an old earth, common ancestry (though not a continuum between humans and animals), etc.  However, the popes (JP II and Pius XII) have been very clear in their denunciation of Darwinistic/secular ideas: against "non-overlapping magisteria", against a continuum between humans and animals, and positively stating there will be empirical signs of man's spiritual nature (ID is a subcategory of this concept).

The claim that Roman Catholicism is contrary to ID and embraces all aspects of evolutionary theory, especially Darwinism, is secular propaganda, which, unfortunately, appears to have been widely accepted by both lay Catholics and the Catholic intelligentsia.  This claim, however, is clearly false if one takes the time to read the papal encyclicals on the topic.


"Truth Cannot Contradict Truth", Pope John Paul II:
http://www.newadvent.org/library/docs_jp02tc.htm

"In his encyclical Humani Generis (1950), my predecessor Pius XII had already stated that there was no opposition between evolution and the doctrine of the faith about man and his vocation, on condition that one did not lose sight of several indisputable points."

"The Church's magisterium is directly concerned with the question of evolution, for it involves the conception of man: Revelation teaches us that he was created in the image and likeness of God (cf. Gn 1:27-29)."

"...theories of evolution which, in accordance with the philosophies inspiring them, consider the spirit as emerging from the forces of living matter or as a mere epiphenomenon of this matter, are incompatible with the truth about man."

"The moment of transition to the spiritual cannot be the object of this kind of observation, which nevertheless can discover at the experimental level a series of very valuable signs indicating what is specific to the human being."

Wednesday, August 8, 2012

Intelligent Design is a HORRIBLE apologetic!

johnnyb at UD makes the pertinent point that ID is not an apologetic, and should not be critiqued as if it were.

http://www.uncommondescent.com/intelligent-design/ed-feser-and-intelligent-design-pt-1-id-is-not-an-apologetic/#comment-429565

I agree.  In fact, I would go so far as to argue that ID is consistent with atheism!  What kind of apologetic for God's existence is also consistent with God's non-existence?  A horrible one, that's for sure!

http://appliedintelligentdesign.blogspot.com/2012/07/intelligent-design-and-atheism-are.html

Some think ID still gets us part of the way there by ruling out materialism.  Well, ID kind of rules out materialism, at least as it is construed today.  Namely, ID rules out non-intelligent matter as an explanation for intelligence: intelligence cannot arise from non-intelligence.  However, intelligent matter is a coherent concept, and not a completely outlandish one; it is a crucial principle of Mormonism, for example.

So, if ID doesn't rule out atheism, and it doesn't rule out materialism, what does ID do?  Is anything and everything consistent with ID?  Is ID a completely vapid concept?  Heh, I can hear the heads nodding from the Darwinist camp!

ID does do something very, very important.  Personally, I consider this "something" to be much more important than any apologetic or ideological argument, because it undergirds the rationality of such arguments.  Furthermore, many bad ideas are quite consistent with many apologetics.  The importance of ID partially lies in its ability to rule out these bad ideas.  But that is just an accidental benefit to ID, just as ID can accidentally serve as a premise for an apologetic argument.

No, there is something much more important about ID.

This "something" is what Applied Intelligent Design is about.

Monday, August 6, 2012

How ID sheds light on the classic free will dilemma

Copied over from my original posting at Uncommon Descent.

http://www.uncommondescent.com/philosophy/how-id-sheds-light-on-the-classic-free-will-dilemma/


The standard argument against free will is that it is incoherent.  It claims that a free agent must either be determined or non-determined.  If the free agent is determined, then it cannot be responsible for its choices.  On the other hand, if it is non-determined, then its choices are random and uncontrolled.  Neither case preserves the notion of responsibility that proponents of free will wish to maintain.  Thus, since there is no sensible way to define free will, it is incoherent. [1]

Note that this is not really an argument against free will, but merely an argument that we cannot talk about free will.  So, if someone were to produce another way of talking about free will, the argument is satisfied.

Does ID help us in this case?  It appears so.  If we relabel “determinism” and “non-determinism” as “necessity” and “chance”, ID shows us that there is a third way we might talk about free will.

In the universe of ID there are more causal agents than the duo of necessity and chance.  There is also intelligent causality.  Dr. Dembski demonstrates this through his notion of the explanatory filter.  While the tractability of the explanatory filter may be up for debate, it is clear that the filter is a coherent concept.  The very fact that there is debate over whether it can be applied in a tractable manner means the filter is well defined enough to be debated.

The explanatory filter consists of a three stage process to detect design in an event.  First, necessity must be eliminated as a causal explanation.  This means the event cannot have been the precisely determined outcome of a prior state.  Second, chance must be eliminated.  As such, the event must be very unlikely to have occurred, such that it isn’t possible to have queried half or more of the event space with the number of queries available.

At this point, it may appear we’ve arrived at our needed third way, and quite easily at that.  We merely must deny that an event is caused by chance or necessity.  However, things are not so simple.  The problem is that these criteria do not specify an event.  If an event does meet these criteria, then the unfortunate implication is that so does every other event in the event space.  In the end the criteria become a distinction without a difference, and we are thrust right back into the original dilemma.  Removing chance and necessity merely gives us improbability (P < 0.5), also called “complexity” in ID parlance.
What we need is a third criterion, called specificity.  This criterion can be thought of as a sort of compression: it describes the event in simpler terms.  One example is a STOP sign.  The basic material of the sign is a set of particles in a configuration.  To describe the sign in terms of the configuration is a very arduous and lengthy task, essentially a list of each particle’s type and position.  However, we can describe the sign in a much simpler manner by providing a computer, which knows how to compose particles into a sign according to a pattern language, with the instruction to write the word STOP on a sign.

According to a concept called Kolmogorov complexity [2], such machines and instructions form a compression of the event, and thus specify a subset of the event space in an objective manner.  This solves the previous problem where no events were specified.  Now, only a small set of events are specified.  While KC is not a necessary component of Dr. Dembski’s explanatory filter, it can be considered a sufficient criterion for specificity.
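As a concrete, and deliberately toy, illustration of the filter described above, here is a Python sketch.  Kolmogorov complexity is uncomputable, so the sketch uses zlib compressibility as a crude computable stand-in for specificity; the probability model, thresholds, and stubbed necessity test are my own placeholder assumptions, not values taken from Dr. Dembski's work:

```python
import os
import zlib

def caused_by_necessity(event):
    # Stage 1: would a law-like process determine this event from a prior
    # state with probability ~1?  Stubbed out; the analyst supplies this.
    return False

def chance_probability(event):
    # Stage 2 input: probability of the event under the chance hypothesis,
    # here a byte string drawn uniformly at random.
    return (1 / 256) ** len(event)

def is_specified(event, ratio=0.5):
    # Stage 3: treat the event as specified if it compresses well, i.e. its
    # description is much shorter than the event itself (a KC-style proxy).
    return len(zlib.compress(event)) < ratio * len(event)

def design_inferred(event, chance_threshold=2**-400):
    if caused_by_necessity(event):
        return False                    # explained by necessity
    if chance_probability(event) > chance_threshold:
        return False                    # plausibly explained by chance
    return is_specified(event)          # complex AND specified -> design

print(design_inferred(b"STOP " * 40))    # repetitive, compressible: True
print(design_inferred(os.urandom(200)))  # incompressible noise: False
```

Both test strings pass the necessity and chance stages, but only the compressible one is specified, which is exactly the distinction the third criterion is meant to capture.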
With this third criterion of specificity, we now have a distinction that makes a difference.  Namely, it shows we still have something even after removing chance and necessity: we have complex specified information (CSI).  CSI has two properties that make it useful for the free will debate.  First, it is a definition of an event that is caused by neither necessity nor chance.  As such, it is not susceptible to the original dilemma.  Furthermore, it provides a subtle and helpful distinction for the argument.  CSI does not avoid the distinction between determinism and non-determinism.  It still falls within the non-determinism branch.  However, CSI shows that randomness is not an exhaustive description of non-determinism.  Instead, the non-determinism branch further splits into a randomness branch and a CSI branch.

The second advantage of CSI is that it is a coherent concept defined with mathematical precision.  And, with a coherent definition, the original argument vanishes.  As pointed out in the beginning of the article, the classic argument against free will is not an argument against something.  It is merely an argument that we cannot talk about something because we do not possess sufficient language.  Properly understood, the classical argument is more of a question, asking what the correct terminology is.  But, with the advent of CSI we now have at least one answer to the classical question about free will.

So, how can we coherently talk about a responsible free will if we can only say it is either determined and necessary, or non-determined and potentially random?  One precise answer is that CSI describes an entity that is both non-determined and, at the same time, non-random.
——————-
[1] A rundown of many different forms of this argument is located here: http://www.informationphilosopher.com/freedom/standard_argument.html
[2] http://en.wikipedia.org/wiki/Kolmogorov_complexity

Sunday, August 5, 2012

Intelligent Design is incompatible with certain forms of determinism

I've addressed elsewhere whether deterministic systems, such as Neoplatonism, are consistent with ID.  However, in that article, the assumption is that the "necessity" eliminated from our causal explanations is unqualified.  I've been assuming so far that the CSI metric is meant to eliminate all forms of necessity, whether they be natural causes, aliens, deities, etc.

However, a reader made the astute point that Dr. Dembski does not actually characterize necessity in such a broad manner, and that I am interpreting his work more generally than may be intended.  In general, when Dr. Dembski talks of eliminating necessity, he is referring to natural causes, such as Darwinian evolution.

So, let's examine how well the concept of CSI works if the term "necessity" is restricted to refer to only certain forms of necessity, i.e. natural causes, while other forms of necessity are still fair game.  Specifically, let's examine what happens if intelligent agents are necessary causes that necessarily cause CSI: P(CSI | Intelligent agent) ~ 1, or in other words, the probability of an intelligent agent creating CSI is very close to 1.

First, notice that such a qualification is not necessary for CSI to be an indication of intelligent activity.  It may be that P(CSI | Intelligent agent) ~ 0, and intelligent agents (IA) only create CSI in very, very rare circumstances.  If intelligent agents are the only beings capable of creating CSI, then the detection of CSI would still indicate the activity of an intelligent agent.  Therefore, even if P(CSI | IA) ~ 0, it is still the case that CSI functions as a design detector, and P(CSI | IA) ~ 1 is not a necessary condition for design detection to work.
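A small numerical sketch of this point using Bayes' rule (the prior below is an arbitrary value chosen only for illustration): if non-agents never produce CSI, then observing CSI implicates an agent no matter how rarely agents produce it.

```python
def p_agent_given_csi(p_csi_given_agent, p_csi_given_no_agent, p_agent=0.01):
    """Bayes' rule: P(IA | CSI) given the two likelihoods and a prior."""
    numerator = p_csi_given_agent * p_agent
    return numerator / (numerator + p_csi_given_no_agent * (1 - p_agent))

# Agents create CSI only one time in a million, but non-agents never do:
print(p_agent_given_csi(1e-6, 0.0))    # -> 1.0
# If non-agents could produce CSI at the same rate, the observation would
# carry no information, and we would be left with the prior:
print(p_agent_given_csi(1e-6, 1e-6))   # -> 0.01
```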

Second, consider the conditions under which CSI will be used for design detection.  The premise behind using CSI is that we do not know whether an intelligent cause has been at work in our given scenario.  Consequently, we do not know whether a particular cause under consideration is a natural or intelligent cause.  Thus, we must take all the causes into account when calculating the CSI for a particular event.

Now, let's say the event does have CSI, and it was created by an intelligent cause.  Furthermore, the intelligent cause satisfies the condition P(CSI | IA) ~ 1.  This means that the probability of a CSI event occurring is 1 / specification resources.  Therefore, the probability of the event under question occurring is 1 / specification resources.

The CSI formula is CSI = -log2(P(E) * I(E) * PR).  P(E) is the probability of the event occurring.  I(E) is the specification resources available for specifying the event.  PR is the probabilistic resources.  We know from the previous considerations that P(E) = 1 / I(E).  This makes the formula look like this: -log2(1 * PR) = -log2(PR).  Since PR >= 1, and -log2 of any number greater than or equal to 1 is <= 0, CSI will always be <= 0.  Consequently, if P(CSI | IA) ~ 1 and an intelligent agent is responsible for the event, the CSI calculation will never register positive, and can never detect design.  Therefore, the condition P(CSI | IA) ~ 1 renders CSI an ineffective metric for detecting design.
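A quick numerical check of this argument, using the formula as stated above (the values of I(E) and PR below are placeholders chosen only for illustration):

```python
import math

def csi(p_event, spec_resources, prob_resources):
    # CSI = -log2(P(E) * I(E) * PR), per the formula above.
    return -math.log2(p_event * spec_resources * prob_resources)

I_E, PR = 10**6, 10**120

# When P(E) = 1 / I(E), the product collapses to PR, so CSI is never positive:
print(csi(1 / I_E, I_E, PR))      # -> about -398.6 bits
# By contrast, a sufficiently improbable event can register positive CSI:
print(csi(2**-500, I_E, PR))      # -> about +81.4 bits
```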

The result from these considerations is that claiming intelligent design is compatible with determinism does not bode well for being able to actually detect intelligent design.  It is in the interest of intelligent design to rely on non-deterministic metaphysics in order to remain logically coherent.  One such non-deterministic metaphysic is libertarian free will, which attempts to be both non-necessary and non-random.  Such a metaphysic is quite compatible with Intelligent Design:

http://appliedintelligentdesign.blogspot.com/2012/08/how-id-sheds-light-on-classic-free-will.html

Saturday, August 4, 2012

Intelligent Design makes stock market predictions

The theory behind Intelligent Design is defined precisely enough that a conference can demarcate what is and is not Intelligent Design:

http://appliedintelligentdesign.blogspot.com/2012/07/ideal-intelligent-design-conference.html

As such, Intelligent Design also makes predictions that apply to the stock market.  Consequently, for an individual motivated to do the research, as I intend to at some point, it is possible for ID to put money behind its claims.

For example, Dr. Sternberg and Dr. Wells have shown that the genome only contains a very small amount of the information that creates biological organisms:

http://www.richardsternberg.com/biography.php

Consequently, companies predicated on being able to understand and modify any area of human physiology through sequencing the genome will not do well.  Of course, companies that focus on only very specific areas of human physiology can bring value to the market and make a profit.  But, to use a programming analogy, the only aspects of human physiology these companies will be able to manipulate are those that vary like function parameters, such as eye color, hair color, and physical attributes that can increase or decrease within a range.  Wholesale restructuring of the body is out of the question.
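To spell out the programming analogy (a hypothetical sketch; the trait names and values are illustrative only, not a model of actual genetics):

```python
def develop_human(eye_color="brown", hair_color="black", height_cm=170):
    """Tunable traits behave like function parameters: arguments can vary
    within a range, but the body plan (the function body) cannot be
    restructured by changing arguments alone."""
    return {"eye_color": eye_color, "hair_color": hair_color,
            "height_cm": height_cm, "body_plan": "fixed"}

# Genome editing, on this view, is like tweaking the arguments:
print(develop_human(eye_color="green", height_cm=185))
```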


Another prediction is that any company predicated on strong artificial intelligence will ultimately fail, as long as it remains true to its principles.  Jonathan Bartlett demonstrates why artificial intelligence is incompatible with Intelligent Design theory here:

http://www.blythinstitute.org/site/sections/37

Such companies will be successful with narrow-scope applications of weak AI, similar to the genome sequencing companies.  But wholesale replication of human intelligence is also out of the question.  So, for example, Ray Kurzweil's Singularity theory will turn out to be bunk, insofar as the theory is necessarily contingent upon strong AI.

http://www.kurzweilai.net


A positive prediction of Intelligent Design is that companies predicated on using information technology to better capitalize on the unique attributes of human intelligence will be very successful.  Google search is one good example.  Foldit is another good example.

http://news.cnet.com/8301-27083_3-20108365-247/foldit-game-leads-to-aids-research-breakthrough/


What other implications of ID can be tested on the stock market?

As in the case of the Ideal Intelligent Design Conference, these implications must be unambiguously tied to ID, and only ID.

Tuesday, July 31, 2012

Ideal Intelligent Design Conference

johnnyb asked over at UD to make proposals for a mid-level ID conference. Here is my proposal: 

1. Practical, useful applications of ID to hard domains, and to hard aspects of other domains: engineering, science, mathematics, economics, politics, psychology.  The use of ID must depend unequivocally on ID theory; there must be no way to account for the application other than within an ID paradigm, as formally defined by Dembski's complex specified information.  Even better if it is derived from CSI.  Examples: CSIC and stock market predictions.

The emphasis here is something that will make a lot of money/wealth, and unequivocally based on/derived from ID.  This topic should be the primary focus of the conference.  Bonus points if it can be shown ID is the best paradigm for making a lot of money/wealth.

2. New areas of intellectual investigation, new kinds of concepts that ID implies.  Includes solving well defined, hard, unsolved problems, such as the problems with epistemology and inductivism in philosophy.  Again, the previous qualification applies: the concept cannot be accounted for by, or derived from, any other sort of theory.  Examples: linking epistemology to ontology and CSI extracted from natural sciences.

3. Metaphysical, philosophical foundations/implications of ID.  E.g. libertarian free will, tensed theory of time (in reference to W. Craig's distinction between A/B theories of time), real contingency, synthetic truths, substance dualism(?), etc.  Bonus points for showing ID is compatible with concepts that are traditionally considered antithetical to ID, such as atheism, physicalism, determinism, etc.  Examples: atheism and supernaturalism.

4. Debunking faux Intelligent Design concepts. Consists of showing a supposedly ID concept can be accounted for within a paradigm antithetical or ambivalent to ID.  Example: Neoplatonism.

5. Showing existing concepts not already connected to Intelligent Design can only be accounted for within ID.  Examples: capitalism and SETI.

6. Experiments, proposals for practical experiments to falsify/verify ID.  Emphasis will be on experiments that either already have results, or show great promise of generating results within 1 year.  Example: Stylus and CSIC.

I am not interested in using ID as a "metaphor" for concepts that aren't unambiguously implicated by ID. For example, hard-Calvinism would not be ID, even though it would qualify as Creationism. So, in general, the conference would accept anything that can be shown to unambiguously implicate ID, and nothing antithetical nor ambivalent to ID.

This conference is the only way for ID to make true intellectual progress.  It is the only kind of ID conference I am interested in.

Intelligent Design Links Epistemology to Ontology (Part 2)

Part 1 pointed out that while Dr. Searle's Chinese Room Argument may successfully preserve a non-mechanical intuition of intelligence, it does so at the cost of eliminating our ability to detect other intelligences.  The argument saves our mind by beheading us, to put the problem figuratively.

Intelligent Design provides a way for us to sew our heads back onto our bodies.  While the Chinese room shows there is a logical difference between syntax and semantics, there may be more to the story.  While semantics cannot be embedded within syntax, syntax may embed the signs of semantics.  In other words, certain syntactical sentences may possess properties that show they contain meaning, even if they do not display what that meaning is.

As an analogy, consider buildings and occupants.  Many different types of occupants can be contained by many different types of buildings, and the building exterior in itself may not tell you anything about its occupants.  This is why we have signs.  The signs tell us that something of interest is contained within the building, even if we might not understand what that something is.

ID makes this very same claim about syntax.  Certain syntactical configurations exhibit a property known as complex specified information.  This property is only exhibited when the configuration is the product of an intelligent agent.  This property cannot tell us, by itself, whether the configuration possesses meaning.  However, since we know only intelligent agents create meaning, it tells us that the configuration may possess meaning.  While we might interpret some product of natural forces to be meaningful, it is not actually intentionally communicating meaning to us.  The communication of meaning is only something that intelligent agents do.

This is how ID sews our heads back on.  Even though it does not give us access to the semantics, it at least is a definitive signpost telling us that minds other than our own exist, and, based on introspection, we know that these other minds are capable of and possibly interested in communicating.  Thus, when we perceive meaning in an identified product of intelligent agency, we have good reason to believe that the meaning is real.  And that is how ID allows us to use the exterior syntax of the Chinese room to access the interior semantics of the translator's mind.

Saturday, July 21, 2012

Can consciousness be an explanation for ID?

Consciousness is the great mystery of the scientific age.  How can the 3rd person world of science account for the 1st person experience of that world?  Try as they might, no philosopher or scientist has come up with a convincing scientific explanation of consciousness.  Whenever they try, they must resort to vague terms, such as "emergence", or even claim it is just an illusion.  If only it were that easy!   Massive unpaid debt?  It is an illusion!  Car in dire need of repair?  It is an illusion!  Roof about to cave in?  It is an illusion!  Wall blocking my path?  It is an illusion!  As the child prodigy says in the Matrix, you've got to realize there is no spoon, nor consciousness.

But for us less intellectually flexible folk, we've got to account for the world as it matters for our day to day lives.  Try to ignore the reality of consciousness and our day to day lives become much shorter!

So, let's see if we can get an angle on this problem of consciousness.  What is so tricky about consciousness that it defies scientific explanation?  Well, the most striking aspect of consciousness is its point of view.  We, as conscious beings, peer out into the world.  And, it is a singular point of view.  Unless I happen to be in a mental institute, there is only one me peering out into the world.

This is an utterly foreign description for the world of science.  Electrons and protons don't gaze out at anything.  They merely bump around, careening through the microscopic world of particle physics.  Neither are physical objects unified wholes, at least as far as physics is concerned.  At whatever level you happen to examine a physical object, it can always be broken up into yet smaller physical objects, until there is nothing left.  And this introduces the unification problem, as described by Angus Menuge in his excellent paper "The Ontological Argument from Reason".

Dr. Menuge's paper shows there are numerous problems with a materialistic description of reason, in that materialism does not allow for certain properties that are essential for reasoning.  The property I want to focus on here is the unification property.  It is simply this: say we have an argument that 1) A implies B, 2) B implies C, therefore 3) A implies C.  This is the standard transitive relation of mathematical systems.  However, a purely material process runs into problems when trying to carry out such a process of reasoning.

To see why, imagine we have three people: Joe, Jack, and John.  Joe holds proposition 1 in his mind.  Jack holds proposition 2 in his mind.  Now, to arrive at proposition 3, John must get propositions 1 and 2 and then unify them into proposition 3.  But, if Joe, Jack, and John are all merely material beings, there can never be one being that holds all three propositions at once.  This is because, as we saw previously, material beings are not single things, but merely collections of many things.  For a material entity to hold one of the propositions, the proposition must be contained within a configuration of matter.  Thus, since each proposition is different, each must consist of its own unique configuration of matter, and for a chunk of matter to hold another proposition, it must assume a completely new configuration.

This means that no single configuration can process multiple propositions and unify them, since the configuration changes with each additional proposition.  Therefore, there is no single entity that can carry out the unification process necessary for reasoning.

Of course, it is easy to write a program, or create some other clever mechanical device, to manipulate symbols so as to arrive at proposition 3.  However, this, in essence, is no more reasoning than an animated film of propositions 1 and 2 merging into 3 can be considered reasoning.  It'd be like saying words on a computer screen are reading.

Thus, we see that the nature of consciousness leaves us with problems that are completely insoluble with a materialistic explanation.  There is an inherently simple, unified nature to consciousness that defies the complex, disparate nature of the physical world.

At this point, we are in a position to see how the mysterious nature of consciousness makes it uniquely suited as the mechanism of intelligent design.  As explained in a previous post, contra Dawkins' popular argument, intelligent design does not necessitate that the designer be more complex than the design.
http://appliedintelligentdesign.blogspot.com/2012/07/must-designer-be-more-complex-than.html

In fact, the design inference works better if the designer contains less complex specified information (CSI) than the design; otherwise, it becomes questionable whether the design contains CSI.  And, if a design does not contain CSI, then it cannot properly be considered a design.  Without a design, the existence of the designer is called into question.  So, the very existence of the designer seems to hang on the designer being simpler than the design.

Yet, such an account is false for all physical processes.  By the very nature of how physics operates, via chance and necessity, the cause must always be as complex, if not more complex, than the effect.  So, to account for a designer, we need an entity that stands in defiance of the entire physical world by being simpler than its product.

Consciousness looks to be a prime candidate for solving our dilemma.  Out of all known substances, consciousness is the only one we know of that is inherently simple and indivisible, as previously shown in the unification argument.  With all physical substances, there is at least the potential that it can be divided, which is why we seem to keep finding smaller and smaller particles that make up our world.  However, consciousness can never be divided, otherwise it would cease to be consciousness.

Consequently, if consciousness is the designer of designs, it must by definition always be simpler than its design.  And if consciousness is always simpler than its design, then the existence of the design and thus the designer need not be dismissed as an illusion.  The result is that consciousness allows us to explain intelligent design.

Friday, July 20, 2012

Intelligent Design and Christology

Or, what does Jesus mean when he says he is the Truth?

In John 14:6 Jesus identifies himself with the truth.  Is Jesus saying he is a truth, one of many truths?  Is he saying that he is leading us to an ultimate truth?  Or, is Jesus literally saying that he is the ultimate truth?

Interestingly, Intelligent Design seems to shed some light on this question.  Intelligent Design itself cannot logically imply the existence of God; it is consistent with atheism, after all:
http://appliedintelligentdesign.blogspot.com/2012/07/intelligent-design-and-atheism-are.html
But that doesn't preclude Intelligent Design from painting an interesting picture of what the true nature of reality may be.

To arrive at this picture requires a bit of upfront brushwork.  For this post, I'm assuming the reader is already familiar with the basics of Intelligent Design theory, most importantly the definition of complex specified information (CSI).
http://www.designinference.com/documents/2005.06.Specification.pdf

Furthermore, I'll assume the reader is familiar with the basic implications ID has for the nature of reality, namely that there must be at least four different kinds of entities: chance, necessity, CSI, and designers.
http://appliedintelligentdesign.blogspot.com/2012/07/intelligent-design-goes-beyond.html

So now that you've swallowed the red pill, it's time to show you how deep this rabbit hole really goes.

Let's revisit our fourfold picture of reality.  Logically, CSI entails that information cannot explain itself without resulting in a contradiction, so there must be a fourth entity that is neither CSI, chance, nor necessity.  I will call this entity a designer.  That is about as far as mere logic can take us.  Now let us start bringing empirical data into the picture.

From observation, it is clear that there is more than one designer in the world (unless you happen to be a solipsist).  But where did all these designers come from?  It is also clear that we designers cannot create other designers, even though we give birth to new designers.  We have a hand in a process, but as far as we can tell, the only thing we can create is CSI, which is merely a reconfiguration of existing things.  Since designers are beyond CSI, that means we cannot create designers.

Again, where did all these designers come from?  Well, linguistically at least, there is a difference between a designer and a creator.  A designer makes use of existing materials to create CSI.  And that is creation, to an extent.  But, such creation requires already existing substances, so it is not creation proper, as in the actual creation of substances and entities.

Now our picture is developing a hole.  We have many substances and entities, but no known means of bringing such things into existence.  Surely they do not pop into existence arbitrarily either.  We now have a gigantic explanatory gap.  It looks like the logic of ID combined with the empirical data takes us beyond even designers, as we now see a need for a fifth entity.  There must be a fifth entity that can create, and not merely create CSI, but create the very substances that CSI is embodied within, and the very designers that are creating CSI.

Thus, this fifth entity must be a creator, and a creator in the proper sense in that the creator can bring into being substance itself.  What an intriguing development, to say the least!  But, this creator is not the same as a capital 'C' Creator, such as a god, demiurge, or what have you.  Or, at least the logic here does not entail such a being, though it is getting closer.  We are still a long way from the beings described in the Bible, Koran, and other such religious texts.

However, we do know some things about this creator.  Most importantly, this creator logically precedes chance and necessity.  So, this creator cannot be the gods of the Homeric myths from which our Western culture originated.  The gods of Homer and Hesiod all came from chaos.  But, according to intelligent design, the creator logically cannot come from chaos, since chaos is another name for chance, and the logical progression has already placed the creator well prior to chance.  The creator, in turn, also logically precedes CSI and designers.  If we were to construct a causal chain with all our elements, the creator would have to be at the start of it all.

Another important thing we know, and here is where we start seeing Christological implications, is that such a creator does not introduce an explanatory gap like the designers did.  We designers cannot explain ourselves.  We all seem to come into being at some point.  Logically, something that only contingently exists, i.e. did not exist at one point in time and did exist at another point in time, does not suffice as an explanation for itself.  Otherwise, it might just pop out of existence again, and perhaps pop into existence again a bit later.  Perhaps at the quantum level things work like this, but at our everyday macroscopic level, we don't tend to think things pop in and out of existence without some kind of more fundamental explanation than mere randomness.  Thus, we designers introduce another explanatory problem, because we cannot create substances.

Notice how this problem disappears with the fifth entity, the creator.  The creator's unique ability is that it can create substances.  All the previous entities in the hierarchy were substances.  In fact, another name for things and entities is substance.  What does this mean?  Why, the creator is itself also a substance.  Logically, this means creators can create creators, and consequently no new explanatory gap is introduced.

Since the explanatory gap disappears with the creator, yet existed with all the other substances in our list, this means the creator is the most fundamental kind of being in our list.  And once we've arrived at the creator, our list appears to be complete.  Of course, we can always imagine even more exotic kinds of entities, but if we did we'd be risking our necks to Dr. Ockham's vicious razor.  As Einstein's famous dictum goes, we've simplified as much as possible, without oversimplifying, contrary to how the disciples of chance-and-necessity reductionism tend to oversimplify.

At this point, we've established that the creator is both the initial being in the causal chain and the completion of our list of causal explanations.  Another way to say this is that the creator is the explanation for everything.  What would we say if we'd discovered the explanation for everything?  I think it'd be fair to say we'd discovered the fundamental truth that explains everything we know.

Now, say you were to meet the creator on the street, and you asked the creator who it is, how could the creator most succinctly identify itself?  Well, the creator would say it is the fundamental truth.  In other words the creator would say "I am The Truth".

Perhaps this is what Jesus meant?

Wednesday, July 18, 2012

Intelligent Design goes beyond information

Whenever we Intelligent Design proponents talk about ID, "information" is a word mentioned quite often.  Information is the key to ID, so it seems.  But just how much does information explain in ID?  Is it a necessary or sufficient component of ID?  I claim that information is merely a necessary component, and that information points beyond itself to something even more fundamental.  But first, we must see the ID hierarchy of being.

Let us start with the basics.  We have a universe filled with objects.  Some of these objects are intelligently designed, some are not.  How do we know whether an object is intelligently designed?  By whether it exhibits complex specified information (CSI).  What creates CSI?  Why, an intelligent designer, of course.

But, what or who is the intelligent designer?  That is the perennial question lobbed at us ID proponents.  As I explained previously, the intelligent designer is not necessarily the same as God, and in fact ID can be said to be consistent with atheism:
http://appliedintelligentdesign.blogspot.com/2012/07/intelligent-design-and-atheism-are.html

However, there is still more to be said about the designer.  The big question is, is the designer itself CSI, or something else?  Well, as also discussed previously, CSI cannot explain itself, otherwise we end up with the Dawkins paradox:
http://appliedintelligentdesign.blogspot.com/2012/07/must-designer-be-more-complex-than.html

Consequently, the designer must be something other than CSI.  Yet, the definition of CSI also excludes the designer from being an agent of chance and necessity, since chance and necessity cannot create CSI.  So, the designer is other than CSI, and it is also other than chance and necessity.  The designer is a fourth kind of entity, and therefore the designer is itself beyond information.

Tuesday, July 17, 2012

Intelligent Design and Atheism are logically consistent

Many people think that Intelligent Design necessarily implies the existence of God.  Therefore, it must count as a religious doctrine, since it is necessarily connected to an essential article of all religions.  However, this analysis relies on hasty reasoning, without giving due consideration to the definition of Intelligent Design.

Many also know that Intelligent Design proponents often claim that ID is agnostic about the designer, that it only cares about examining the activity of said designer.  Many claim this is disingenuous, since who else could a designer of our world be but the God of the faith traditions?

Maybe the many are correct.  But, let us first take a moment to examine what ID on its own merits entails.  To start with, let us see what the basic claim of ID is.  The basic claim of ID is that only intelligent agency (IA) is capable of creating complex specified information.  This claim takes a bit of unpacking, and has been adequately unpacked in many other arenas, so I will forgo the unpacking for now.

Back to the core claim, then.  Does it imply that the IA must necessarily be God?  Well, no.  There are many other potential IAs that can account for a given portion of CSI.  For instance, humans are a great candidate.  In fact, humans account for all the CSI for which we have direct, unequivocal, historical evidence of its creator.

Plainly, then, ID does not necessitate the designer be God.  Otherwise, you'd have to say God created the computer you are reading this article on, instead of a number of American and Chinese researchers and workers.  Perhaps in certain theologies that statement is true, but in the everyday common sense understanding of that statement, it is false that God created your computer.

Now that we have identified that the intelligent designer can be many other agents besides God, it is clear that the initial claim that ID necessarily entailed the designer be God is false.  But perhaps an infinite regress argument gets us back to the necessity of God?  Maybe, but this is not straightforwardly the case.

To understand why, let us consider physics.  In physics, in order for anything to happen, we must at least have matter and energy.  Now, if matter and energy inherently necessitate the existence of God, then physics too is a religious doctrine, and should be removed from the schools.  But it is not immediately obvious that matter and energy do inherently necessitate the existence of God.  Instead, matter and energy are considered basic substances in our universe, kinds of substances that have existed for the entire duration of the universe.

Consequently, there appear to be some things whose origin we do not account for.  If intelligent agency is shown to exist, it may be the same sort of substance.  If ID turns out to be correct, physics may say that there are three basic substances: matter, energy, and intelligent agency.  Accordingly, there is no inherent reason to account for the origin of intelligent agency, at least not any more than there is an inherent reason to account for the origin of matter and energy.  And, just as the mere existence of matter and energy does not tend to convince atheists they must become theists, so the mere existence of intelligent agency gives atheists no more reason to abandon their atheism.

In fact, a number of existing and ancient religious narratives are consistent with a non-theistic existence of intelligent agency.  If we define theism as belief in an all-powerful, all-knowing God, such a view of God is fairly recent in the religious timeline.  There are even religions today that don't believe in a deity with omnipotence and omniscience.  Without these qualities, the gods in these religions are essentially supermen - humans that have acquired greater-than-normal powers.  As such, there is no ontological difference between these gods and humans, and in fact ancient myths tell of humans supplanting the domain of the gods.  Consequently, these religious traditions are actually atheistic in the logical meaning of the term, and are additionally consistent with ID.

That being said, while Intelligent Design is logically consistent with such an atheism, it may not be evidentially consistent, as suggested by events like the big bang.  And, upon further reflection, it may also turn out there are deep logical problems with intelligent agency existing without any further explanation.  But, if so, then these same logical problems also apply to the existence of matter and energy, and are not unique to Intelligent Design.

Sketch of experiment using Stylus and Complex Specified Information Collecting (CSIC) to falsify Intelligent Design

The central claim of Intelligent Design is that only intelligent agents can create complex specified information (CSI).  However, Intelligent Design research has not focused very much on how intelligent agents actually impart this information, choosing rather to focus on detecting information in the first place.  This makes sense, since we have to detect the information in order to then determine whether it was deposited by an intelligent agent, or some other causal agent.

In a previous post, I outlined a practical approach for discovering how intelligent agents, such as humans, impart information.  The approach is called Complex Specified Information Collection, and I describe CSIC here:
http://appliedintelligentdesign.blogspot.com/2012/04/background-experiment-and-results-of.html

In CSIC, human interaction is incorporated into an algorithmic search to add active information to the search process.  The amount of active information imparted can be measured at each stage to determine just how responsible the intelligent agents are for the information in the search.  The concept of active information is explained in Dr. Dembski's and Dr. Marks' work on the No Free Lunch theorems:
The Search for a Search: Measuring the Information Cost of Higher Level Search


In "The Search for a Search" Dr. Dembski shows that the effectiveness of a search algorithm in finding a target is limited by the amount of active information the search has about the target, which is information that reduces the area the algorithm must search.  Thus, extremely effective search algorithms should correspondingly possess large amounts of active information.

Dr. Dembski, with the aid of Dr. Robert Marks and Mr. Winston Ewert, has shown this correspondence holds true for Avida.  Avida has been purported to generate complex specified information without the intervention of an intelligent agent.  However, Dr. Dembski et al. have shown that the active information was actually front-loaded into the search algorithm by the programmer.

The ID experimental researcher Dr. Doug Axe has been investigating the implications of ID for evolution.  He has developed a cutting-edge simulation called Stylus to examine how well functional specification can be acquired through stepwise mutation:
http://www.plosone.org/article/info:doi/10.1371/journal.pone.0002246

The notion of active information also applies to Dr. Axe's stepwise mutation algorithm.  If it ends up being especially effective at creating functional specification, it is possible to take the algorithm apart and examine it piece by piece to see how the active information has been integrated into it.  With this analysis we have a baseline of active information with which to characterize the search process.

Now we also have the tools to go the next step and see how intelligent agents can increase the amount of active information in a search.  This is very valuable, since we can learn how the conditions, behaviors, and techniques exhibited by the intelligent agents impact the amount of active information created, if any.  We can also falsify Intelligent Design theory by detecting an increase in active information within the search with no corresponding intelligent agent interaction.

Thus, by combining Stylus and CSIC, we have a full-fledged scientific experiment with which we can either falsify or confirm intelligent design, as well as gain valuable insights into how intelligent design works.

Friday, July 13, 2012

Intelligent Design Links Epistemology to Ontology (Part 1)

A famous argument against Artificial Intelligence is John Searle's Chinese room argument.  In this argument, a person resides in a room and possesses a perfect ability to translate Chinese character strings into semantically equivalent English character strings.  However, the person himself has no knowledge of Chinese or English.  This argument shows that even observing presumably intelligence-based behavior tells us nothing about whether the responsible agent has intelligence regarding the content of the behavior.  Thus, there is a strict division between the syntax (form) of something and its semantics (meaning).

While a brilliant argument in its own right, it still leaves the most important question unanswered: how do I know whether an agent is in fact intelligent?  I don't have any direct insight into the minds of others; the best I can do is infer internal states from external effects.  But, if Dr. Searle's argument is correct, then even if my friends all exhibit behavior I would classify as intelligent, I still have no clue whatsoever whether they are conscious beings or just robots.  I must take a leap of faith to assume that I even have friends, instead of a bunch of fancy mechanical dolls.  And quite apart from being disturbing, this leap of faith is rationally unjustified under Ockham's razor.  If in theory intelligent agency is strictly independent from, and unnecessary to account for, intelligent behavior, then it violates Ockham's razor to infer intelligent agency from intelligent behavior, as the former is unnecessary for the latter.

Thus, even though Dr. Searle's argument has been hailed as a definitive blow against the concept of Artificial Intelligence, it seems an even greater blow against real intelligence.  Due to his argument, we become eternally stuck within a world filled with automatons, in which we seem to be the only conscious beings; in other words, we become Dwayne Hoover in Vonnegut's "Breakfast of Champions".

Intelligent Design provides the metaphysical equipment to get us out of this dilemma.  Stay tuned to find out how!

Part 2

Thursday, July 12, 2012

Is Neo-Platonism consistent with Intelligent Design?


I don't believe Neo-Platonism, in which Platonic forms have some kind of causal power, whether directly or through an intermediary, is Intelligent Design proper.  This is because the Platonic forms are necessary entities, and insofar as they are necessary, they in turn exert necessity upon their instantiations.

However, if the causal agent that brings a particular object into being is necessitated in doing so, then that object is a product of necessity, at least in some way.  Per the CSI criterion, said object does not possess complexity, as it has a probability of 1.  On the other hand, perhaps it has a probability less than 1 but greater than 0.  In that case, Neo-Platonism does not explain the existence of the object any better than materialism does.

This argument is very similar to the proof that ID implies the supernatural, in which Neo-Platonism is substituted for the physical laws of chance and necessity.

http://appliedintelligentdesign.blogspot.com/2012/07/does-id-imply-supernatural.html

That being said, Platonism does serve a very valuable purpose for Intelligent Design in that it provides an objective specification.

Wednesday, July 11, 2012

SETI needs Intelligent Design

As Dr. Gonzalez and Dr. Richards show in the book "The Privileged Planet", a SETI project with materialistic metaphysics does not have a hope in heaven of finding a habitable planet.  The formation of a habitable planet is so improbable that it exceeds the probabilistic resources available in our universe.

Fortunately, the fact that we live on a habitable planet within our universe makes the possibility of a trans-universal designer extremely strong.  At the very least, it makes such a designer much the preferable explanation over a process that works through mere chance and necessity.  And if an intelligent designer created our planet, despite the enormous probabilistic barriers to doing so, then other habitable planets become a live possibility.

This possibility gives SETI a way out of the probabilistic hole it dug for itself with materialism.  Only if SETI allows for the possibility of intelligent design can it justify further searches for other inhabited planets in the universe.

That is not all.  The most interesting result from "Privileged Planet" goes beyond the fact that our planet is finely tuned for highly complex living beings.  Our planet has also been uniquely placed and constructed to make discovery of the universe a tractable task.  And the truly amazing thing is that the finely tuned parameters for life are the very same parameters required for discovery.  This requires an even higher level of design than if the two sets of parameters were independent.

However, this third aspect is explained by neither the function of survival nor that of discovery.  It is not necessitated by the situation, yet it exhibits a highly intelligent design decision.  Since design decisions are made for a reason, we need to identify a third reason for this third feature of our finely tuned abode.

So, what does the convergence between survival and discovery entail?  It entails that wherever there are habitable planets in the universe, they will share a common frame of conceptual reference.  What purpose would a common set of concepts serve?  On the Pioneer space missions, a plaque was placed on each craft that attempted to communicate aspects of our world to extraterrestrial beings, and part of this communication made use of concepts that should be universal, such as mathematical concepts.  For another example, in C.S. Lewis' "Out of the Silent Planet" the protagonist at one point attempts to communicate with an inhabitant of Mars by drawing on concepts the two share.

With these considerations in mind, one possible explanation for the convergence between survival and discovery is that it provides a means for disparate inhabitants of the universe to understand each other.  Why provide a means of understanding?  Perhaps because the inhabitants are meant to discover each other...

Tuesday, July 10, 2012

Method for Making New Technology from ID


http://appliedintelligentdesign.blogspot.com/2012/07/looking-for-human-engineering-concepts.html

In a number of arguments for ID, such as those of Dr. Meyer, Dr. Dembski, and Dr. Behe, the writers argue that there appears to be intelligent design in nature because natural entities resemble human-engineered products, and human-engineered products are intelligently designed.

If we express this relationship formally, it looks like: biological/natural entity -> human engineered object -> intelligent design.  But what if we reverse the relationship, so it reads: intelligent design -> human engineered artifact -> natural entity?  This means that if intelligent design theory is true, then perhaps we can use human engineering to predict the kinds of functionality we will find in the natural world.

For instance, consider a city.  To make a functional city, a wide variety of important technologies are required: communication, transportation, plumbing, power generation, fuel, waste disposal, construction, health care, and so on.

Where might such a system reside within the biological world?  Perhaps the human body?  The human body is organized much like a city, and accomplishes a similar function: providing for its own wellbeing and maintenance.  We can look through the body for the components we'd expect in a city.

"But, how does this give us new forms of technology?" you may ask.  Well, for one thing we can look at how the same functionality is implemented in the body and then attempt to reverse engineer it to improve our existing technology.  There is currently a technology company doing exactly this by reverse engineering the motor in the bacterial flagellum.  Our body contains much, much more than new kinds of motors, however.  It has instantaneous communication systems, extremely effective repair and construction systems, and is amazingly effectively at extracting and utilizing energy from food.

Nor is that all: we can also discover entirely new kinds of technology.  While our body shares similar systems with cities, it also has systems and capabilities we have not thought of before that apply equally well to a city.  And this is where the real promise lies in the technique of learning from nature.

Here are a couple examples to get things started.
- While our cities today have a very limited utility system, the body not only distributes power and water, but also has a system of highly decentralized manufacturing.  Today, we are rapidly nearing the point where people can manufacture objects within their homes.  The body's innovation is that it also has a resource utility system, so that the basic resources for manufacturing are distributed throughout the body.  The same could be done within cities.
- All the systems in the body are autonomously run and optimized by a central processing unit, for which our cities have no existing analog, though one would be very useful.
- The body is maintained by what amount to robots.  While manning our cities with robots is not yet possible, the body presents an example of how it can be done.
- The body is able to turn its fuel waste into a form that can be recycled right back into fuel again.

It has been noted by a number of scientists that biological organisms appear to be phenomenally optimized for their particular function.  The technology in nature far exceeds any technology we have ever been able to create.  If we can perfect our ability to reverse engineer natural systems, our modern technology will grow by incredible leaps and bounds, since we can see from nature that we are only beginning to touch what is technologically possible.  There is an enormous wealth of technological innovation that is ours for the taking, to which we have been blinded ever since the industrial revolution by Darwinism and methodological naturalism.

However, with the introduction of Intelligent Design, our blinders have been removed.  Forget mining for gold and diamonds.  Forget pumping oil.  We have vast untapped quantities of the most valuable resource in our universe: CSI.

Still feeling skeptical?  Well, ask yourself what the greatest technological innovation of the past century has been, the one that has revolutionized our entire world.  It is information technology.  But where has information technology existed since the beginning of life?  Why, within the very DNA and protein production that make up all organisms.  Think how much sooner we could have invented computers if we'd only known where to look.

What untold other technologies are lying out in the natural world just waiting for us to discover them?  It is time to look for Intelligent Design!

ID Implies the Supernatural


Here is a first-order predicate logic proof showing that ID implies a supernatural causal agent.  The agent is properly called "supernatural" because it possesses an ability that is above nature.

Legend
------
V x p : universal quantification - for all x, proposition p holds
~p : not p
p1 ^ p2 : conjunction
p1 v p2 : disjunction
p1 -> p2 : implication

New universal propositions are introduced using temporary variables, as in the following example.
The | indicates that the corresponding line is part of a subproof.
The numbers on the right indicate which prior lines, and which laws/identities, are used to generate the current line.

1. | x0 ^ y0                                        x0, y0
   --
2. | x0                                             1
3. V x, y ( x ^ y -> x )                            1-2

Proof
-----
P(x): x is an entity that is entirely controlled by physical laws
C&N(x): x is an entity that operates by events entirely describable by chance and necessity
C&NO(x): x is an entity whose origin can be explained by proximate chance and necessity causes
CSI(x): x contains CSI
C(x, y): x is the proximate cause of y

Goal: V x, y ( C(x, y) ^ CSI(y) -> ~P(x) )

1.  V x ( P(x) -> C&N(x) )
2.  V x ( CSI(x) -> ~C&NO(x) )
3.  V x, y ( C&N(x) ^ C(x, y) -> C&NO(y) )

4.  | C&N(x0) ^ C(x0, y0) ^ CSI(y0)                 x0, y0
    --
5.  | CSI(y0)                                       4
6.  | ~C&NO(y0)                                     2, 5
7.  | C&NO(y0)                                      3, 4
8.  | _|_                                           6, 7

9.  V x, y ( ~(C&N(x) ^ C(x, y) ^ CSI(y)) )         4-8
10. V x, y ( ~C&N(x) v ~C(x, y) v ~CSI(y) )         9, De Morgan's Law

11. | C(x0, y0) ^ CSI(y0)                           x0, y0
    --
12. | ~C&N(x0)                                      10, 11
13. | ~P(x0)                                        1, Modus Tollens, 12

14. V x, y ( C(x, y) ^ CSI(y) -> ~P(x) )            11-13
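
As a sanity check, the logical core of this proof can be verified mechanically.  Below is a minimal sketch in Python (my own illustration, not part of the original argument) that brute-forces every interpretation of the five predicates over a tiny finite domain, keeps only those satisfying premises 1-3, and confirms the goal holds in all of them:

from itertools import product

DOMAIN = range(2)  # a tiny finite domain is enough to probe the logical structure
n = len(DOMAIN)

models = 0
counterexamples = 0
# Enumerate every assignment of the unary predicates P, C&N, C&NO, CSI
# (boolean tuples indexed by domain element) and the binary relation C.
for P, CN, CNO, CSI in product(product([False, True], repeat=n), repeat=4):
    for C_flat in product([False, True], repeat=n * n):
        C = lambda x, y: C_flat[x * n + y]
        premise1 = all(not P[x] or CN[x] for x in DOMAIN)         # 1. P(x) -> C&N(x)
        premise2 = all(not CSI[x] or not CNO[x] for x in DOMAIN)  # 2. CSI(x) -> ~C&NO(x)
        premise3 = all(not (CN[x] and C(x, y)) or CNO[y]
                       for x in DOMAIN for y in DOMAIN)           # 3. C&N(x) ^ C(x,y) -> C&NO(y)
        if not (premise1 and premise2 and premise3):
            continue
        models += 1
        goal = all(not (C(x, y) and CSI[y]) or not P[x]
                   for x in DOMAIN for y in DOMAIN)               # C(x,y) ^ CSI(y) -> ~P(x)
        if not goal:
            counterexamples += 1

print(f"models of premises 1-3: {models}; counterexamples to goal: {counterexamples}")

This should report zero counterexamples, which is exactly what the proof predicts: no interpretation can satisfy the premises while violating the goal.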


UPDATE:

Elsewhere, an author claims this argument shows ID is committed to substance dualism.  

http://www.jackscanlan.com/2011/08/homologous-legs-now-has-a-facebook-page-its-2011-right/

While I agree substance dualism is the easiest way to make sense of this argument, it is still not a necessary conclusion.  For example, Mormon theology maintains there is a way for matter to be intelligent and possess free will.  Such a claim is not logically incoherent (as far as I know), so here we have a materialistic theory of intelligence that is compatible with my argument above.

However, a materialistic theory of intelligence would still be supernatural, since intelligent matter is beyond the physical laws of chance and necessity.

Subjective specification makes CSI non-functional

It has been claimed a number of times that the specification in complex specified information (CSI) is subjective, at least to some degree.  The claim is that even though specification can, to a degree, be quantified, it still depends at some level on a human domain of discourse for its description.

However, if it is true that specification is always subjective this presents a problem.  To understand why, we must examine the problem CSI is attempting to solve.

In bygone days, creationists argued that the universe must have been designed because so many of its parameters are finely tuned, and the combination of so many independent finely tuned parameters is vastly improbable.  While this intuitively seems like a knock-down, drag-out argument, there is a logical fallacy at work.

Take a jar of multicolored marbles.  Suppose it happens to be configured in such a way that the marbles spell out the word "Supercalifragilisticexpialidocious" around the edge of the jar.  Mathematically speaking, this particular configuration, while very improbable, is exactly as probable as any other configuration of marbles in the jar.  Yet, of necessity, one of these improbable configurations must be instantiated in the jar, whether through material or intelligent causation.  As such, the mere fact that a configuration is improbable says nothing about the causal agency that brought it about.

CSI gets around this problem by placing a non-uniform probability distribution on sets of marble configurations.  So, while the particular configurations are equally probable in themselves, the probability of a configuration coming from a particular set varies.  Thus, with the right choice of specification, it becomes possible to exclude either a material or an intelligent causal agent as being responsible for the marble configuration.
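
To make the distinction concrete, here is a minimal sketch (letter strings stand in for colored marbles, and the specification is a deliberately simple stand-in of my own choosing) showing that while every exact configuration is equally improbable, the probability of landing in a small, independently describable set is a different and far more discriminating quantity:

import random
import string

ALPHABET = string.ascii_lowercase
LENGTH = 10  # a short "ring" of marbles, one letter per marble color

# Under a uniform draw, every exact configuration has the same tiny probability.
p_single = 1 / len(ALPHABET) ** LENGTH

# A specification carves out a *set* of configurations that is independently
# describable -- here, stubbed as "all marbles the same color".
def matches_specification(config):
    return len(set(config)) == 1

n_specified = len(ALPHABET)  # "aaaaaaaaaa", "bbbbbbbbbb", ...
p_specified_set = n_specified / len(ALPHABET) ** LENGTH

print(f"P(one exact configuration)     = {p_single:.3e}")
print(f"P(any specified configuration) = {p_specified_set:.3e}")

# Empirically, blind random draws essentially never land in the specified set.
draws = 100_000
hits = sum(matches_specification(''.join(random.choices(ALPHABET, k=LENGTH)))
           for _ in range(draws))
print(f"random draws hitting the specification: {hits} / {draws}")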

But how do we choose the correct specification?  The problem is that there are many possible specifications for the marble configurations; in fact, there is a specification for each possible probability distribution.  The information conferred by picking one of these specifications must be averaged over all possible choices of specification.  Unfortunately, this brings us right back to square one: averaged over all choices of specification, each marble configuration once again has an equal, though very low, probability.

This is where the problem with a subjective specification comes into play.

We might say that the selection of the specification is up to the scientists detecting intelligent design: even though, syntactically speaking, one specification is as preferable as any other, there is an external mapping of value onto each specification (i.e. semantics), held by the scientists, that makes one specification more valuable than another.  Furthermore, this mapping is held by the vast majority of people, who all agree the mapping is objective.

Unfortunately, while this may well be true, such a mapping begs the question if it is meant to support a scientific claim, since the mapping is no longer itself a strictly scientific claim.  Science, at least of the hard-science variety, decides matters of qualification using quantification.  It proposes a variety of models to fit observed data, and uses model fitting (e.g., linear regression) to determine which model is best.  In this way it is possible to make objectively true statements about the physical world, statements holding with mathematical veracity given true premises.
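
For illustration, here is the sort of model fitting meant here, as a minimal sketch with made-up data (the quadratic law, noise level, and candidate degrees are all arbitrary assumptions of mine): several models are proposed, and the data decide among them quantitatively.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x**2 + 0.5 + rng.normal(0, 0.05, x.size)  # data from an assumed quadratic law

# Propose competing polynomial models and let residual error judge the fit.
for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)
    residual = np.sum((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: sum of squared residuals = {residual:.4f}")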

However, in the case of CSI, the choice of specification is what determines the answer to whether a particular object is designed.  Two choices of specification can give two completely different answers. Thus, if the choice of specification is determined by some factor completely external to the marble configurations under consideration, then this may be equivalent to fitting the data to the model, since the specification chosen is part of the data being measured.  Fitting the data to the model does not count as science.

As a result, even though an externally determined specification is not necessarily a case of fitting the data to the model, it remains a live possibility.  As long as there is no objective way to eliminate this possibility, then by the principle of maximum entropy the possibility of fitting the data to the model and that of fitting the model to the data must be given equal weight.  And if both possibilities carry equal weight, then the CSI metric can never give a positive reading: it inherently cannot discriminate between design and non-design, and therefore does not count as a scientific claim.

The only way the CSI metric can count as science is if there is an objective way to choose a specification.

Monday, July 9, 2012

Capitalism Needs Intelligent Design

Let us say that economic models fall along a spectrum from totally centralized to totally decentralized economies.  The question is, where on this spectrum do we find the greatest source of wealth for human prosperity?  This question turns on a prior question: is wealth a bounded quantity, or an unbounded one?

If wealth is bounded, so that there is only a finite amount to go around in this world, then the haves are always taking from the have-nots.  This is a great injustice, and the only just economy that provides prosperity for the most people, limited though that prosperity is, is a centrally planned economy that shares the wealth out equally amongst all.

However, suppose wealth is unbounded.  Suppose new wealth can be created at any given time to improve the lot of man.  Then a more decentralized economic model is more just than a centralized model, because the decentralized model produces more wealth and benefits a greater number of people.  In fact, a centralized model in this case becomes inherently unjust, because it artificially and needlessly restricts how much wealth can benefit mankind, and thus leads to greater poverty and misery.

What is the nature of wealth, bounded or unbounded?  Within the modern paradigm of materialism, the answer must be bounded.  Matter abides by strict conservation laws, and while resources may get shuffled around, there is no such concept of "creation."  As such, within a materialistic worldview, the most just economy is a centralized economy.  Therefore, capitalism is unjustified within a materialistic worldview.

On the other hand, if we adopt a worldview that includes intelligent design, then creation becomes a live option.  We gain the metaphysical possibility of wealth generation.  How so?  According to ID, intelligent agents are defined by their ability to originate information, not merely pass it from one location to another.  Consequently, it is only within an ID-friendly worldview that capitalism gains the metaphysical properties needed to make sense of a decentralized economic model based on wealth creation.

Saturday, July 7, 2012

Must the designer be more complex than the design?

One of Dr. Richard Dawkins' favorite arguments against the coherence of Intelligent Design is that ID does not explain complexity, because the designer must be even more complex than the design.

Why does Dawkins say this?  After all, he believes great complexity comes from very simple origins through the process of Darwinian evolution.  Yet, for some reason, introducing a designer is supposed to imply greater preceding complexity.

While I am not sure why Dawkins makes this claim, I can address a reason why he may, and the problem with this reason.

First, he may be thinking of the designer as some complex physical entity, such as a factory.  In this case it is quite obvious that generating even a seemingly simple object, such as a pencil, requires an enormous number of complicated processes and mechanisms.  So it is quite easy to accept that if the designer were like a factory, then it in turn would require even more explanation than the pencil, in which case intelligent design theory would not be very helpful.

However, intelligent design theory does not say the designer is like a factory.  In fact, it precludes the designer from being like a factory.  To see this, we must examine the core concept of ID, which is complex, specified information.

Complex specified information (CSI) is a mathematical quantification of an entity.  The two criteria for an entity to possess CSI are that it must be highly unlikely (complex) given the environment in which it came to exist, while also being precisely and concisely described by a specification that is independent of that environment.
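
For reference, Dembski's 2005 paper "Specification: The Pattern That Signifies Intelligence" combines roughly this pair of criteria into a single measure (my rendering of his formula; details vary across his writings):

$\chi = -\log_2\left[ 10^{120} \cdot \varphi_S(T) \cdot P(T \mid H) \right]$

where $\varphi_S(T)$ counts the specificational resources (the number of patterns at least as easy to describe as the observed pattern $T$), $P(T \mid H)$ is the probability of $T$ under the relevant chance hypothesis $H$, and $10^{120}$ bounds the probabilistic resources of the observable universe.  A value of $\chi > 1$ is taken to indicate CSI.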

For a causal agent to count as the intelligent designer responsible for the CSI in an entity, the entity must satisfy the two criteria in the context of being generated by that particular agent.  Take the pencil factory as an example, since pencils are quite evidently designed, are relatively complex, and can be described quite simply as a writing and erasing instrument.  Can we say the pencil factory is the designer of the pencil?

Well, let's look at the criteria for CSI.  Would we consider it highly unlikely for a pencil factory to generate pencils?  Probably not, unless it is a particularly bad pencil factory.  Next, do we consider the pencil's description to be independent from the factory?  In other words, does the pencil factory produce something that is better described as totally unlike a pencil?  Again, probably not, unless it is a particularly bad pencil factory.  As such, the pencil factory cannot be said to be the designer of the pencil.

In this way we see that if the supposed intelligent designer stands to the designed entity as a pencil factory stands to a pencil, then ID itself says the supposed designer is not the real designer.  Rather than disagreeing with Dawkins, ID actually agrees: the designer cannot be like a pencil factory in relation to its design, ever growing in complexity.

Instead, the designer must be quite independent from its design.  This means that a design implies nothing about the complexity of the designer.  While it may well be the case that the designer is more complex, this is not necessitated by ID and the designer may be much, much simpler than the design.

In fact, the foregoing argument logically entails that there is more to intelligent design than just complex specified information:
http://appliedintelligentdesign.blogspot.com/2012/07/intelligent-design-goes-beyond.html

Monday, April 30, 2012

Background, experiment and results of CSI Collecting (CSIC)

The essential idea of CSI Collecting is to incorporate human pattern detection to improve the capabilities of search and optimization algorithms.

The question is, do humans have the ability to detect patterns any better than computers?  If not, then any advantage human interaction brings is equivalent to some computer algorithm.  Hence, there is nothing especially unique about human involvement, and there is no guarantee it will bring an advantage to a particular problem.

On the other hand, if humans do have a pattern recognition capability that is non-computational, then we can expect human involvement to bring an advantage on every problem where an advantage is possible.  (Some problems can only be solved by trying all combinations; on those, human interaction will bring no advantage, whether it is non-computational or not.)

Why would we think such a non-computational ability exists in humans?  This question takes a bit more work to answer.  The answer is a hypothesis, based on a couple of assumptions that are potentially testable.

The starting point is Dr. Dembski's paper "A Search for a Search".  This paper makes two important points: 1) a search process cannot produce more information than is originally placed in the search algorithm, and 2) a search for a search likewise cannot produce more information than is placed in the initial search algorithm.  The term "information" here refers to a metric of how much more likely a search algorithm is to find a search target than a purely random, sample-based search.

According to these two points, as long as the only means we have of inserting information into a search process is another search process, we can never insert information into a search process.  This means that, taken across all problem domains, any search will perform no better than a random sample search.  The only way to get information into a search process is from a non-search.  However, all algorithms capable of inserting information can themselves be characterized as searches.  Thus, the source of information must ultimately be non-algorithmic, and since all computers are algorithmic, the source of information must also be non-computational.
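
The flavor of the claim that no algorithm beats random sampling across all problems can be seen in a toy experiment (a rough sketch of my own; strictly, the No Free Lunch results count distinct solution evaluations, which this simplified loop does not enforce).  Averaged over randomly drawn fitness landscapes, a bit-flipping hill climber does essentially no better than blind random sampling:

import random

N_BITS = 8
SPACE = 2 ** N_BITS
EVALS = 20      # evaluation budget per run
TRIALS = 2000   # landscapes averaged over

def random_landscape():
    # A fitness function drawn at random over the whole search space.
    return [random.random() for _ in range(SPACE)]

def random_search(f):
    return max(f[random.randrange(SPACE)] for _ in range(EVALS))

def hill_climb(f):
    x = random.randrange(SPACE)
    best = f[x]
    for _ in range(EVALS - 1):
        neighbor = x ^ (1 << random.randrange(N_BITS))  # flip one random bit
        if f[neighbor] >= f[x]:
            x = neighbor
        best = max(best, f[x])
    return best

rs = sum(random_search(random_landscape()) for _ in range(TRIALS)) / TRIALS
hc = sum(hill_climb(random_landscape()) for _ in range(TRIALS)) / TRIALS
print(f"avg best fitness, random search: {rs:.3f}")
print(f"avg best fitness, hill climber:  {hc:.3f}")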

The one known source of non-algorithmic information is intelligence.  According to intelligent design, intelligence can create information, in particular, complex specified information.  However, is complex specified information the same kind of information that a search process requires?

The mathematical definition of complex specified information is, approximately, the specificational resources multiplied by the number of specifications we are interested in, divided by the number of possible specifications.  This characterization can be fit to the search process.  The range of potential solutions a search will examine corresponds to the number of possible specifications.  The target set of solutions corresponds to the specifications we are interested in.  The specificational resources are the number of solutions sampled by the search process.  If the search process selects target specifications more often than we would expect given their proportion of the potential specifications, then the CSI formula, calculated over the search process, will register information.
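
Here is a toy rendering of that mapping (the function and its exact formula are my own illustration, in the spirit of Dembski and Marks' "active information", not a formula from the paper):

import math

def search_information(space_size, target_size, samples, hits):
    """Bits by which a search outperforms blind sampling.

    space_size  : number of possible solutions ("possible specifications")
    target_size : size of the target set ("specifications of interest")
    samples     : solutions examined ("specificational resources")
    hits        : sampled solutions that landed in the target
    """
    expected_hits = samples * target_size / space_size  # blind-search expectation
    if hits == 0 or expected_hits == 0:
        return 0.0
    return math.log2(hits / expected_hits)  # positive means information present

# A search hitting a 10-element target 4 times in 100 samples, in a space of
# a million solutions, registers about 12 bits beyond blind sampling:
print(search_information(10**6, 10, 100, 4))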

Now, if we invert the argument that CSI implies intelligence, we can say that intelligence can potentially, though not necessarily, impart information into a search process.  As we saw when comparing the definition of CSI to information in a search process, if information is imparted into a search process, it will find its target more often than it would without that information.  So, if an intelligent being, such as a person, is involved in a search process, the intelligent being can cause any given search algorithm to perform better than is mathematically possible for an unaided algorithm.

This is the theoretical basis behind the CSI Collecting (CSIC) method.  If humans have some ability to insert information into any search algorithm, then there should be a general methodology for allowing humans to interact with search algorithms.  Does such a methodology exist in modern computer science?  Currently, a number of researchers are attempting to incorporate human interaction into search processes, the best known example being the Google search engine, which uses our search patterns and web page linking to improve search results.  However, none of these methods attempts to establish a generalized approach.  The general field is known as human computation, and it currently lacks a theoretical foundation compatible with intelligent design's dual claims that algorithms cannot create information and that intelligent agents can.

Now, you may be asking: given that CSIC seems to be a unique approach to human computation, how do I plan to test my hypothesis, and what are my results?  First, I must admit there are practical problems in testing the efficacy of human computation.  The most glaring is distinguishing between A) a general, non-algorithmic ability to improve a search algorithm and B) the happenstance that some algorithmic ability improves a particular search algorithm.  In theory the two cases are quite distinct: for any algorithmic ability in B, there will always be instances of search algorithms where A improves the search and B does not.  In practice it is not possible to test an ability over all possible problems; we can only test a very small subset.  The good news is that many finite problem subsets have approximately the same characteristics as the set of all problems.  So, if I pick such a subset, then while I cannot categorically state that the tested ability falls in A rather than B, I can make such a claim with a strong degree of confidence.

Which subset of problems has the necessary characteristic?  It is the subset of problems to which the No Free Lunch Theorem (NFLT) applies, or almost applies (ANFLT).  Unfortunately, the set of NFLT problems is not very large.  Fortunately, the set of ANFLT problems is very large; in fact, most problems of interest quite likely fall into this set, because they are NP-complete or harder.  So, as long as I pick a problem that is ANFLT, I can at least attach a confidence rating to my hypothesis, as well as potentially generate useful results.

The second practical issue is determining whether humans categorically contributed to the search process, or whether I simply picked a bad search algorithm.  Here too, I cannot say definitively whether my algorithm is to blame.  The best I can do is give a degree of confidence, and in my case I cannot even mathematically quantify that confidence, since my current work is merely a first pass to see whether the idea shows promise.

The general technique I use is as follows.  I use a standard multi-objective genetic algorithm.  A genetic algorithm takes a set of solutions, measures how good each solution is according to some fitness valuation function, varies the solutions to generate a new set using variation operators such as mutation and crossover, and then selects a final set from both sets of solutions according to some criterion.  The algorithm reiterates this process on each subsequent final set until a stopping criterion is reached.  The solutions themselves are represented to the algorithm as fixed-length bit strings.  The one innovation I add to create CSIC is to allow a person to provide input during the selection phase.  The person can see only the bit-string solution and its fitness valuation.
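
Below is a minimal sketch of that setup (single-objective for brevity, where the actual runs used a multi-objective algorithm; the fitness function, rates, and population size are placeholders of mine).  The only departure from a standard genetic algorithm is the selection hook, which can be served either by the algorithm or by a person who sees only the bit string and its valuation; the valuation counter anticipates the comparison metric described next:

import random

BITS = 32             # fixed-length bit-string representation
POP = 20
MUT_RATE = 1.0 / BITS

def fitness(bits):
    return sum(bits)  # placeholder objective (the real runs used scheduling / RSA)

def mutate(bits):
    return [b ^ (random.random() < MUT_RATE) for b in bits]

def crossover(a, b):
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:]

def algorithmic_select(scored, k):
    return [s for s, _ in sorted(scored, key=lambda t: -t[1])[:k]]

def human_select(scored, k):
    # The person sees exactly what the algorithm sees: bit string and valuation.
    for i, (s, f) in enumerate(scored):
        print(i, ''.join(map(str, s)), f)
    picks = input(f"indices of {k} solutions to keep: ").split()
    return [scored[int(i)][0] for i in picks[:k]]  # assumes k valid indices entered

def run(select, generations=10):
    pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
    valuations = 0
    for _ in range(generations):
        children = [mutate(crossover(random.choice(pop), random.choice(pop)))
                    for _ in range(POP)]
        scored = [(s, fitness(s)) for s in pop + children]
        valuations += len(children)  # rough count of new solution valuations
        pop = select(scored, POP)
    return max(fitness(s) for s in pop), valuations

best, used = run(algorithmic_select)
print(f"best fitness {best} after {used} valuations")
# Swap in run(human_select) to put a person in the selection phase.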

The metric of comparison between the human and the genetic algorithm is the number of unique solution valuations needed to improve on the best solution found to date.  Whichever search process, algorithmic or manual, requires fewer unique solution valuations to find a better solution is considered the superior search process.

My first problem domain consisted of scheduling member roles for multiple club meetings.  This is a form of the well-known scheduling problem, and as such is harder than NP-complete.  My method was to run the genetic algorithm until it stopped finding better solutions for a large number of iterations; I then manually attempted to find better solutions.  Using roughly 0.05% of the fitness valuations the algorithmic search had consumed while failing to improve further, I was able to find three superior solutions.

My second problem domain is finding the primes that generate an RSA public/private key pair.  This domain is much harder than NP-complete, and as such is quite likely an ANFLT domain.  I ran the algorithmic search in tandem with 500 attempts by humans using the Amazon Mechanical Turk service.  The humans contributed 2.4% of the overall set of superior solutions.

These results are still preliminary, and much hard mathematical work remains to quantify the degree to which humans contributed, and whether this contribution is statistically significant enough to validate the ID hypothesis.  At the very least, however, it is clear that humans can contribute to algorithmic optimization when given only the exact same data available to the algorithm.  So the initial results show promise, however slight.