Thoughts on AI
-
- Site Admin
- Posts: 2177
- Joined: Fri Jun 14, 2002 2:26 pm
- Real name: Frank Sanns
Thoughts on AI
The world is ablaze with AI. Is it great, or is it scary? What can it do, and what can it not do?
Back in the 1980s, the research facility where I worked had a fiber optic connection to a mainframe installed. It was a marvel, to say the least. The top scientists and statisticians gathered as we began to probe this new capability and how it could streamline our experiments.
An experimental design was set up and we ran strict sets of experiments. The outcome results were gathered and run through the mainframe. The results were astonishing! I was recognized as one of the top experts in one of the areas we were probing, had solved problem after problem in the field, and had a huge success rate on predictions and outcomes. However, on this day, the computer saw something that I did not see. The correlation coefficients were 0.99, which indicates outstanding correlations. I still remember that room, with everyone gathered around the terminal. You could hear and see the astonishment and excitement over the results. In my own astonishment, I looked at the results in awe and realized that I had missed the direction to the solution of the problem. I vividly remember that sinking feeling inside that I no longer had relevance; I was Captain Dunsel (a Star Trek TOS reference).
I was overjoyed but dejected at the same time. My words to the room that day were: "The computer can see order where the human mind cannot."
As the marveling continued and our new direction became clear, I started to run scenarios in my head of what the computer had found and what its results would open up for an entire field of study. I looked at the data and the graphs and extrapolated to where they would take us. It did not make sense to me. What did I miss? I thought. A few moments later, the answer became clear to me.
The computer had chosen the most common LOW, MEDIUM, and HIGH levels of the variables. This is a quite common technique in experimental design, so the computer was right to do so. Still, the results did not make sense, but only to me; the others were still marveling.
It turns out that the linear, symmetrical spacing of the variable levels gave a linear response in the result. At least, that is what the computer thought. In reality, one of the key responses was not linear at all. It was a curve, but with only three points, the computer fitted a straight line with a high correlation coefficient. The result was actually getting worse, rather than getting better as the computer predicted.
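A toy example makes the trap concrete. The numbers below are hypothetical, not the original experiment: three evenly spaced levels sampled from a response that peaks and then turns down still give a straight-line fit with a correlation near 0.99, and extrapolating that line points in exactly the wrong direction.

```python
import numpy as np

# Hypothetical response: improves, peaks at x = 4, then gets worse
def true_response(x):
    return 4.0 * x - 0.5 * x**2

levels = np.array([1.0, 2.0, 3.0])   # evenly spaced LOW, MEDIUM, HIGH
y = true_response(levels)

# Least-squares straight line through just those three points
slope, intercept = np.polyfit(levels, y, 1)
r = np.corrcoef(levels, y)[0, 1]
print(f"slope = {slope:+.2f}, correlation r = {r:.3f}")  # r is ~0.99

# The line says "keep pushing higher"; the real curve has turned over
for x in (4.0, 5.0, 6.0):
    print(f"x = {x}: line predicts {slope * x + intercept:5.2f}, "
          f"reality gives {true_response(x):5.2f}")
```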
I presented my findings to the others, who were at first skeptical but soon saw the error the computer had made. These were smart people, and even they were being fooled. By the end of the session, my quote was: "The computer can see order where there is no order."
The relevance here is that AI is only as good as its input. It cannot solve what the human mind cannot, especially with all of the incompleteness and errors in the results that are fed into it. Something like chess, where it can play against itself, is an ideal use of AI. Fixing photos and faking videos: excellent. Solving quantum gravity, or the strong and electroweak forces in curved space, is something else entirely.
Lastly, there is confidence. Ask one of the AI engines that everybody is marveling over a question, and you will get an answer. The answer is given as gospel. Unless, of course, you give an argument back. Sometimes the AI adjusts and gives more information, or it capitulates. You can sometimes even get it to flip between agreement and disagreement with an answer and offer apologies.
Which brings me to the point of confidence. How do you know whether the answer being given is correct, given the rapidity and confidence with which ChatGPT and similar engines respond? The Dunning-Kruger curve is relevant to all of the internet "experts" as well as to the AI engines. A little bit of knowledge gives high confidence. It sounds like the loosely informed and the AI engines know definitively what they are parroting back, but there is a very high probability that some of the answers are complete rubbish. Yet the voice gives a highly confident answer.
Dunning-Kruger says that if you know nothing, you have no confidence to answer a question. With a little knowledge, you (and the AI) think you are an expert; you answer definitively and stand your ground, but you have a very high chance of not really knowing what you are talking about. More knowledge means you know how much you do not know, and your confidence in your answer drops. Then, when you become a true expert in a particular field, your confidence goes back up.
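As a picture of what that curve looks like, here is a purely illustrative sketch (the formula below is invented to reproduce the commonly drawn shape; it is not fitted to Dunning and Kruger's data):

```python
import numpy as np
import matplotlib.pyplot as plt

k = np.linspace(0, 1, 400)   # knowledge, from none (0) to expert (1)

# Invented shape: an early overconfidence bump, a valley, then a slow
# earned recovery toward expert-level confidence
bump     = 0.95 * np.exp(-((k - 0.12) / 0.07) ** 2)
recovery = 0.85 / (1 + np.exp(-(k - 0.65) / 0.10))
confidence = bump + recovery

plt.plot(k, confidence)
plt.xlabel("knowledge")
plt.ylabel("confidence")
plt.title("Stylized Dunning-Kruger curve")
plt.show()
```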
At this point, much of the internet, and AI with it, sits in the dangerous high-confidence, low-knowledge portion of the D-K curve. Insert my comment at this point that "the computer can see order where there is no order." I.e., it is wrong. QED.
Achiever's madness; when enough is still not enough. ---FS
We have to stop looking at the world through our physical eyes. The universe is NOT what we see. It is the quantum world that is real. The rest is just an electron illusion. ---FS
- Richard Hull
- Moderator
- Posts: 15336
- Joined: Fri Jun 15, 2001 9:44 am
- Real name: Richard Hull
Re: Thoughts on AI
I was stunned to see the curve in the graph. As an electronics engineer, the tunnel diode biasing curve immediately came to mind. The tunnel diode never made it big, but it was initially touted as the future of electronics. As with all useful active electronic components, it offers amplification, in its case via negative resistance. The issue is that current amplification occurs only over a useful range of microamperes, and the bias is hypercritical, lying between a fraction of a volt and, as a rule, no more than 1.5 volts.
It is fabulous as a single-component, two-lead amplifier and finds its greatest use as an ultra-high-frequency oscillator. As it was never produced in any quantity to speak of, replacement components for this failed marvel are currently in the $100 range, though surplus tunnel diodes can be had on eBay for a few bucks. The GE 1N3714 is a good one to experiment with.
Below is the curve, with the shaded area being the hyper-narrow negative-resistance bias region noted above.
Comparing the curves, the tunnel diode's useful area occurs between the "this is more complicated than I thought" peak and the pit where you realize just how little you know. Oddly, both of these plots put the real usefulness exactly where learning more destroys your confidence level. That is where the true trip into real study and expert-level learning begins. A valuable stage for both the tunnel diode and proper learning.
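For anyone curious about the shape of the diode curve itself, here is a small numerical sketch of a common textbook tunnel-diode model. The parameters are illustrative round numbers, not 1N3714 datasheet values; the point is just to locate the negative-resistance region where current falls as voltage rises.

```python
import numpy as np

# Textbook two-term tunnel-diode model (illustrative parameters):
#   I(V) = Ip*(V/Vp)*exp(1 - V/Vp)      tunneling current (the hump)
#        + I0*(exp(V/(n*Vt)) - 1)       ordinary diode current
Ip, Vp = 1.0e-3, 0.065        # peak current [A], peak voltage [V]
I0, n, Vt = 1.0e-7, 2.0, 0.026

def current(v):
    return Ip * (v / Vp) * np.exp(1 - v / Vp) + I0 * np.expm1(v / (n * Vt))

v = np.linspace(0.0, 0.55, 1000)
i = current(v)

# Negative resistance is wherever dI/dV < 0
didv = np.gradient(i, v)
neg = v[didv < 0]
print(f"negative resistance from ~{neg.min()*1e3:.0f} mV "
      f"to ~{neg.max()*1e3:.0f} mV")
```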
Richard Hull
- Attachments
- Tuunel diode.png: the tunnel diode I-V curve, with the negative-resistance bias region shaded
Progress may have been a good thing once, but it just went on too long. - Yogi Berra
Fusion is the energy of the future....and it always will be
The more complex the idea put forward by the poor amateur, the more likely it will never see embodiment
- Dennis P Brown
- Posts: 3583
- Joined: Sun May 20, 2012 10:46 am
- Real name: Dennis Brown
Re: Thoughts on AI
The very fact that it is called AI gives away the game; the people who create this software know full well that it is not intelligence and is not even remotely like it. Strangely, for some reason, people fall for this nonsense.
Ignorance is what we all experience until we make an effort to learn
-
- Posts: 108
- Joined: Fri Nov 25, 2022 9:25 am
- Real name: Ryan Ginter
Re: Thoughts on AI
The current situation on fusor.net equates well to what is going on with AI. A copy rarely exceeds the original. With only a handful of exceptions, practically all fusors being built by current members of the forums are nothing more than reproductions of the methods paved by the forum's early fusioneers. As you would expect of a mere copy, none of this work advances the collective knowledge base. No value is truly gained by the community; it only serves as a sort of catching up for the individual retracing the work of others.
In this same way, AI is merely a copy of the collective mind of humanity. Why should we expect a greater level of intelligence to come out of it when it is but our reflection, or, more specifically, a copy of the average human's written works? The creators of AI do their best to uplift it above the comments of the many fools online by curating its training, but even in the best case, we shouldn't expect any LLM to exceed human capabilities.
There is only one metric in which the AI can outperform those it was trained on, and that is speed. The rate at which humans can experience time is set by the speed with which our neurons can reset and ready themselves for another action potential. The time spent during ion movement is imperceptible to that neuron. Of course, it's not quite that simple, as it's the collective of neurons as a system that experiences time, but the limitation in its simplest form is still the number of action potentials per second. Computers, on the other hand, do not have this limitation. The desktop computer found in your home will comfortably run billions of calculations per second. The supercomputers running the LLMs operate many orders of magnitude faster.
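A rough back-of-envelope comparison, using commonly cited order-of-magnitude figures rather than measured values:

```python
# Order-of-magnitude figures only (assumed, not measured)
neuron_rate = 1e3        # ~1 kHz ceiling on sustained neuron firing
desktop_ops = 4e9        # a few GHz times a few instructions per cycle
llm_cluster = 1e15       # petaFLOP-class accelerator clusters

print(f"desktop vs neuron: {desktop_ops / neuron_rate:.0e}x")   # ~10^6
print(f"cluster vs neuron: {llm_cluster / neuron_rate:.0e}x")   # ~10^12
# Caveat: one machine instruction is far simpler than what a single
# biological neuron does, so this compares raw step rates only.
```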
Because of this, AI essentially experiences time more slowly than humans, giving it the capacity to conduct more "thought" in the same amount of time. Mind you, I am anthropomorphizing the AI; as we all know, it doesn't truly think. With this difference in speed of thought, the AI is capable of generating answers within a few seconds that might take a human over a minute to match. That said, this still doesn't give the AI the capacity to come up with a better answer than a well-learned human could, provided the human were given more time.
The primary issues I see with these LLMs are that the average person is overly willing to accept the generated answer, and that the AI makes no attempt to contemplate its work. The human mind is naturally self-doubting. The all-or-nothing nature of neuron action potentials creates complex voting systems where some neurons vote yes, pushing the voltage at the next neuron's dendrites toward its firing threshold, and some vote no, pulling it away. If the voltage fails to reach the required threshold, no signal fires. In this way, the system of the human mind weighs many different thoughts and ideas against one another. The system is far too complex for simple analogies, but essentially there is always an element of self-doubt generated from different parts of the mind, ready at a moment's notice to make the person turn back and reconsider their answer. The human will also continue to cast self-doubt after concluding, whereas an AI ceases all work once the requested information has been provided. I understand that modern neural networks attempt to replicate the voting system of the mind, but it's clear that they lack the larger structural systems of human thought required to truly be self-aware and reflect.
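As a cartoon of that voting idea (a toy model with arbitrary weights and threshold, not biophysics):

```python
import numpy as np

rng = np.random.default_rng(0)

def fires(n_excite, n_inhibit, threshold=15.0):
    # Toy neuron: excitatory inputs push the potential up toward the
    # firing threshold, inhibitory inputs pull it down. It only fires
    # if the summed "votes" cross the threshold.
    excitatory = rng.uniform(0.5, 1.5, n_excite).sum()   # "yes" votes
    inhibitory = rng.uniform(0.5, 1.5, n_inhibit).sum()  # "no" votes
    return (excitatory - inhibitory) >= threshold

print(fires(30, 5))    # broad agreement: fires (True)
print(fires(20, 18))   # heavy dissent: stays silent (False)
```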
As Frank has said, it is the confidence of their answers that poses the greatest threat. I fear the Dunning-Kruger curve may need to be modified for the average person going forward. The first peak in confidence may now very well extend down to zero knowledge, as people type in their heavily biased prompts with zero self-awareness, then point to their screens and say, "See, the machine says so."
- Richard Hull
- Moderator
- Posts: 15336
- Joined: Fri Jun 15, 2001 9:44 am
- Real name: Richard Hull
Re: Thoughts on AI
The pity is that many who seek answers on the internet may, in the future, have no inkling of whether they are talking to a real, viable, learned human source or to an AI source. Double-edged swords can be very dangerous things.
I liked the part about human reflection and doubt being inherent in our internal system. This comes from what is hoped to be a guided life with many sensory inputs. True lessons are learned via fear of bad outcomes past, present, and future, which we know are part of life. We hopefully learn control as we accumulate and process knowledge during our lives. Thus far, this is found only among the living. Thus, we come to accept the good, bad, and ugly, steering ourselves in and amongst our peers. Out of this living of life we hope to be the best we can be. A purely human journey.
At present there is no true self-awareness in AI networks. If there ever is, the lack of real-world inputs of pain and sensory perception might result in a self-contained, blind, unfeeling entity, capable of God only knows what.... We know! ..... We have seen living people like this!
Richard Hull
Progress may have been a good thing once, but it just went on too long. - Yogi Berra
Fusion is the energy of the future....and it always will be
The more complex the idea put forward by the poor amateur, the more likely it will never see embodiment
-
- Posts: 108
- Joined: Fri Nov 25, 2022 9:25 am
- Real name: Ryan Ginter
Re: Thoughts on AI
This is a point I had overlooked, but it is quite possibly the most significant and concerning of all the issues presented. These LLMs have no inherent aversion to bad outcomes. As you've stated, AI lacks life experience. A human makes many decisions throughout their life. At first, most are very poor decisions, but over time we learn to avoid such choices because of the negative sensations they bring upon ourselves and those close to us. It is the brain's self-regulating system of pleasure and pain, coupled with our sensory inputs and the past inputs we call memory, that shapes who we are.
Such a system isn't present in generative AI. It seeks only to calculate the most probable response based on its training data.
- Richard Hull
- Moderator
- Posts: 15336
- Joined: Fri Jun 15, 2001 9:44 am
- Real name: Richard Hull
Re: Thoughts on AI
It is our ability to think very deeply, as represented above, both passionately and dispassionately, and with some feeling for what is termed humanism, that sets us apart from any current "smart, well-trained" AI.
Richard Hull
Progress may have been a good thing once, but it just went on too long. - Yogi Berra
Fusion is the energy of the future....and it always will be
The more complex the idea put forward by the poor amateur, the more likely it will never see embodiment
-
- Posts: 108
- Joined: Fri Nov 25, 2022 9:25 am
- Real name: Ryan Ginter
Re: Thoughts on AI
Current, as you have highlighted, being the word of importance here. I have no doubt that one day a machine could be designed to think, feel, and reflect to a greater degree than any person alive today. The existence of AI itself is not the issue; it is the unapologetic willingness its developers display in forcing it into every aspect of life in pursuit of monetary gain.
Such actions fail to acknowledge the negative impacts such a change could bring about in the world. AI, like any tool, can be utilized for our benefit, but regulations should be in place before its widespread adoption. Of course, the rush is a greed-fueled race to see which company will be the first to control the market.
I do hope such pursuits will be to the benefit of all, but I am sceptical.
- Richard Hull
- Moderator
- Posts: 15336
- Joined: Fri Jun 15, 2001 9:44 am
- Real name: Richard Hull
Re: Thoughts on AI
That double edged sword thing, again....
Pitifully, we may yet see the effort end up to our detriment, ending in tears.
Richard Hull
Progress may have been a good thing once, but it just went on too long. - Yogi Berra
Fusion is the energy of the future....and it always will be
The more complex the idea put forward by the poor amateur, the more likely it will never see embodiment
-
- Posts: 2
- Joined: Mon Oct 07, 2024 6:29 am
- Real name: Ryan Hulke
Re: Thoughts on AI
Wanted to post my perspective on this as a younger person with a (brief) software background -
The models you are seeing in ChatGPT and the like, which allegedly cannot reason, are based on the Transformer architecture. When you train billions of parameters (if you're unfamiliar with AI, think of parameters like synapses in the brain) on the entirety of the text on the internet, the Transformer is essentially a compression algorithm for the collective knowledge of humanity. You are not wrong when you say that it is memorizing and regurgitating facts and language patterns it has seen, and you can demonstrate this by asking it a tricky question it has likely never seen, or by asking something the tokenizer obscures, such as "How many R's are in the word 'strawberry'?" (a notorious example for a while; the model tokenizes words into one or a few high-dimensional vectors each, so it does not see the actual letters).
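You can see the tokenization issue for yourself with OpenAI's open-source tiktoken package (a minimal sketch; token boundaries vary from one encoding to the next):

```python
import tiktoken  # pip install tiktoken

# Byte-pair encoding used by GPT-4-era models
enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("strawberry")
pieces = [enc.decode([i]) for i in ids]
print(ids, pieces)
# The model receives a few token IDs, not ten individual letters, so
# "count the R's" asks about characters it never directly sees.
```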
However, the model also memorizes basic reasoning patterns from the internet. This can be seen from the simple fact that adding the phrase "Think step by step" to your prompt significantly increases the likelihood that it gets to the right answer on some problems (per https://arxiv.org/pdf/2205.11916). Once researchers understood this, they began post-training models to apply those reasoning patterns on their own, without being prompted to. For example, a model can be trained to bring relevant first-principles knowledge to a difficult problem, such as the underlying equations of a physics problem, and then build up from there, verify at each step, backtrack when it makes a mistake, and so on, instead of just trying to predict the answer in one token. The latest extension of this idea is OpenAI's new o1-preview model, released this September: rather than pre-training it to memorize more of the internet, they post-train it on reasoning patterns using a novel reinforcement learning technique, in which an AI agent is placed in an environment and allowed to interact with it until it figures out the best way to get from A to B, or in this case, from a hard question to an answer. Then, at inference time, when users ask it a question, the model generates and explores reasoning tokens in the background before it gives you a final answer. o1-preview is already solving many of the reasoning tasks that were previously impossible for language models, and this is just the first version of such a model. Here is Google DeepMind's publication on this idea if you want to take a look: https://arxiv.org/pdf/2408.03314
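To try the zero-shot "think step by step" trick yourself, here is a minimal sketch using OpenAI's Python client (the model name is an assumption; any chat model works, and an API key must be set in OPENAI_API_KEY):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()

question = ("A bat and a ball cost $1.10 together. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Ask the same question plain, then with the chain-of-thought nudge
for suffix in ("", " Think step by step."):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": question + suffix}],
    )
    label = suffix.strip() or "plain prompt"
    print(f"[{label}] {reply.choices[0].message.content[:200]}")
```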
So even if it's "just memorizing" and "doesn't actually understand what it's doing": if it can memorize reasoning patterns (which is essentially what humans do anyway; although it's partially encoded in our DNA, you can think of that as evolution's way of memorizing reasoning patterns based on past living beings' experiences, and we're just born with those memories), then why wouldn't it be able to do any task a human can do, and more?
- Paul_Schatzkin
- Site Admin
- Posts: 1112
- Joined: Thu Jun 14, 2001 12:49 pm
- Real name: aka The Perfesser
- Contact:
AI for Fusor?
Speaking of AI (but not really addressing the philosophical challenges discussed in this thread...)
I had a thought near the end of the day last Saturday at Richard's about applying AI to this site.
Is there any way to add a "layer" or an "interface" to a site like this so that the database could be mined with direct questions?
Often newcomers arrive with questions, and the answer is "read the FAQs."
With an AI interface, a user could ask their question, "What do I need to create a vacuum?", and the AI could scour the FAQs and give them the answer. That would eliminate the new user's need to poke and scroll to find the answer they're looking for.
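For what it's worth, the usual pattern for this is called retrieval-augmented generation: index the forum posts, retrieve the few most relevant to the question, and hand those to a language model as context. Below is a minimal sketch with scikit-learn and made-up post snippets; the posts, the question, and the ranking scheme are all illustrative, and a real build would index the actual phpBB database and put an LLM on top.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-ins for real posts pulled from the forum database
posts = [
    "A two-stage rotary vane pump will take a demo fusor into the "
    "rough vacuum range on its own.",
    "Real fusion needs a secondary high-vacuum pump, such as a "
    "diffusion pump, behind the mechanical pump.",
    "Deuterium can be sourced as heavy water and electrolyzed.",
]

question = "what do I need to create a vacuum"

# Rank posts against the question by TF-IDF cosine similarity
vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(posts)
scores = cosine_similarity(vec.transform([question]), doc_matrix)[0]

for idx in scores.argsort()[::-1][:2]:
    print(f"{scores[idx]:.2f}  {posts[idx][:60]}")
# A real system would now pass the top posts plus the question to an
# LLM, asking it to answer using only the retrieved text.
```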
I did talk to Andrew Robinson about it; he's helped us with some technical challenges in the past, and he said he'd think about it. But I thought I'd pose the question to the rest of the group here, too.
And then it dawns on me... "why don't you just ask AI...????"
Maybe I'll do that later today.
--PS
Paul Schatzkin, aka "The Perfesser" – Founder and Host of Fusor.net
Author of The Boy Who Invented Television
"Fusion is not 20 years in the future; it is 60 years in the past and we missed it."