Knowledge Chaining With Technology

by Wei Jing HO – Friday, 4 October 2024, 3:38 PM

Number of replies: 5

This is an interesting topic that required me to re-read it a few times just to make sure I roughly understood what was going on.

From a logical, pragmatic viewpoint, Coherentism and Foundationalism make more sense to me.

Consider the scenario of paying for food in a university canteen.

10 years ago, my belief would have been that payment is made in cash. Tracing this belief through Coherentism exposes that it is part of a larger set of beliefs about the university’s/society’s payment practices.

In the present day, if I return to visit my alma mater, I find they have moved to cashless transactions, requiring me to install a mobile app on my phone and link it to my bank account. This new truth needs to be incorporated as a new belief in my knowledge construct of how to pay at a given place.

Foundationalism seems to rest on basic beliefs that can be established as prior knowledge, from which you chain your understanding of the world. This is logical as well. I have seen countless parents giving toddlers money so they can practice counting and buying things. The parents don’t explain long chains of evidence and justification for how payment systems work; they let the kids learn by practice.

——————————————————————————————————————————-

Regarding Infinitism: as a first-pass assessment, it appears unsustainable. After all, humans cannot see far enough to chain so much knowledge. Each of us is a bounded knowledge generator; as society’s knowledge gets built and updated, new humans produce knowledge and beliefs from new starting points, treating certain things as facts on which to build new knowledge.

Asking humans to validate beliefs through an infinite chain of justification is impossible, but it may be possible for future forms of AI technology. Modern AI that learns from data requires that data, information, and knowledge be stored.

From this moment on, as long as humans keep updating their knowledge in a secure, centralised digital base, there could be a common source from which future generations of humans, using more powerful AI, trace beliefs and knowledge back to their original points, spanning linkages that may appear infinite, or simply too many, for any limited human being to track.

We stand on the shoulders of those before us. We cannot see, feel, or know what they knew, but from their formulations we lift ourselves up, and subsequent generations of humans will continue in their own way, working to expand and deepen human consciousness.

We are limited constructs, but we can be empowered through ICT, e.g., memory technologies and AI technologies.

We can take inspiration from old ideas like Infinitism, which were put aside due to infeasibility, and imagine how they might be woven into new information-system designs for binding knowledge together and retracing its origin points.
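The idea of retracing beliefs to their origin points can be sketched as a simple walk over justification links. This is purely a hypothetical illustration, not a real system: the `justified_by` mapping and the belief labels are invented for the canteen example above.

```python
# A minimal sketch of retracing a belief to its origin points in a
# shared knowledge store. All names here (the justified_by mapping,
# the belief labels) are hypothetical illustrations.

def trace_origins(belief, justified_by, _seen=None):
    """Walk the justification links behind `belief`, returning every
    chain that ends at a belief with no further justification."""
    if _seen is None:
        _seen = set()
    if belief in _seen:                      # circular support (coherentism)
        return [[belief, "(cycle)"]]
    _seen = _seen | {belief}
    parents = justified_by.get(belief, [])
    if not parents:                          # a basic belief (foundationalism)
        return [[belief]]
    chains = []
    for parent in parents:
        for chain in trace_origins(parent, justified_by, _seen):
            chains.append([belief] + chain)
    return chains

# Toy knowledge base: how one might pay at the canteen, and why.
justified_by = {
    "pay with mobile app": ["canteen is cashless", "app links to bank account"],
    "canteen is cashless": ["observed on recent visit"],
    "app links to bank account": ["bank supports the app"],
}

for chain in trace_origins("pay with mobile app", justified_by):
    print(" <- ".join(chain))
```

A chain that bottoms out at a belief with no parents behaves like a foundationalist basic belief, while a chain that loops back on itself behaves like coherentist mutual support; the sketch simply makes both visible.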


Re: Knowledge Chaining With Technology

by P. D. – Sunday, 6 October 2024, 1:08 AM

Your optimistic view on the possibility of overcoming the limitations of infinitism through technology is inspiring, but several key dimensions need to be considered:

  • Technological limitations and data:
    • Even though AI can process vast amounts of data, it depends on the quality and availability of that data. If the input data is incomplete or biased, it can lead to incorrect conclusions.
    • Storing all human knowledge in a centralized database presents enormous security, ethical, and other challenges.
  • Human vs. artificial cognition:
    • AI can assist with data analysis but may not be able to understand context or meaning in the way humans do.
    • Justifying beliefs is not just about tracing information back but also about critically evaluating their validity and relevance.
  • Practical application of infinitism:
    • Even with advanced technologies, the question remains whether infinite regress truly provides better justification or whether it simply pushes the problem further.
    • Relying on infinite chains of justification can lead to decision paralysis because there will always be another level to examine.
  • Ethical and philosophical implications:
    • Increased reliance on AI for justifying our beliefs may weaken our ability for independent critical thinking.
    • There is a risk that technology will dictate what is considered true or justified, which can impact freedom of thought.

Your reflections open an interesting discussion about connecting epistemology with modern technologies. However, it’s important to be cautious about how much technology can truly solve deep philosophical problems like infinitism. It might be more appropriate to focus on strengthening coherentism and foundationalism through technology, rather than attempting to surpass human limitations via infinite chains of justification. Technology can be a powerful tool, but it should not replace critical thinking and the human capacity for understanding.


Re: Knowledge Chaining With Technology

by James Carmichael – Tuesday, 8 October 2024, 3:44 AM

Wei Jing, Petr, thanks for this discussion. (And, I know that you’re both probably onto week 4! I’m catching up; work.) I wanted to flag up something that seems self-evident to me and that I *think*, from what you’re writing, is equally self-evident to both of you — but the main reason I want to flag it up is to check on that. Maybe I just think that this is self-evident and neither of you agree! (To quote Petr from another thread: “Relying on beliefs considered self-evident can be problematic because what is self-evident to one person may not be so to another.” wink) So! It is: 

But the problem with infinitism isn’t just functional or pragmatic, right? The issue isn’t just that our brains can’t keep track of an infinite regress; the issue is that a definition of knowledge with no foundation strikes many people as fundamentally *unsatisfactory*. Wei Jing, you clearly know much more about this than I do, but that’s one of the interesting things about the last few years’ developments in AI, right? At least in terms of industrial development, we’ve moved entirely away from models that attempt to ground machine learning or thinking in first principles, and toward what are sometimes called (confusingly, for our purposes here) foundation models, which are the *opposite* of that from an epistemological perspective: they rest on *no* foundational principles but rather on the correlations and statistical relationships that emerge within large data sets. I think this invites an important reappraisal of the nature of knowledge, especially as AI comes closer and closer to manifesting human-like ‘intelligence’ and ‘behaviors’, and one reason it invites that reappraisal is that it challenges foundationalism (again, in the epistemic sense; again, it’s very confusing that the models are called that, for our purposes here!).

To return to my main point: IS it self-evident to you both that the problem with infinitism isn’t mainly functional, but that, rather, one at least *might* feel that there’s something fundamentally incomplete about it? Or, am I leaping to conclusions about what is self-evident and what is not based upon my personal priors and responses?


Re: Knowledge Chaining With Technology

by Wei Jing HO – Sunday, 13 October 2024, 2:28 PM

Hi Petr and James, thanks for providing feedback that made me think more! I was also busy at work, so I have been catching up as well.

Petr is correct about the limitations. Computer Science and Artificial Intelligence are among the youngest and most active human knowledge domains at this point in time. However, the growth in these areas came from contributions of knowledge from other, older domains, e.g., Philosophy, Mathematics, and Economics. We are also limited in what we can do and create by the advancement of the “body”: the Engineering and Electronics domains. In a sense, the “body” limits what the “brain”, or software, can do.

One pattern in the Computer Science and Artificial Intelligence domains is that certain fundamental concepts were usually generated before the electronic body was ready, e.g., Neural Networks (https://en.wikipedia.org/wiki/History_of_artificial_neural_networks). When the “body” is ready, people test the old concepts, and sometimes an idea catches fire when it works, growing and spreading rapidly. The next generation of electronics, e.g., quantum computing, may make other older concepts viable. But as Petr noted, we need to consider whether they have any practical context or infringe on ethics.

Thanks, James, for your framing as well; yes, I was exploring Infinitism with technology as a “maybe” idea. The electronic “body” may one day allow “seemingly” infinite backward chaining of associated knowledge. For what purpose, I don’t know, but the potential is worth pointing out. I agree with James that when that time comes, someone will probably need to revisit whether the concept of Infinitism may find completion in newer forms of AI.
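To illustrate the “seemingly” infinite part: a lazy walk over justification links can follow an unbounded regress to whatever finite depth a machine is willing to examine. The rule below, which justifies each belief B_n by a freshly generated B_{n+1}, is a deliberately artificial stand-in for an endless chain.

```python
# A sketch of "seemingly infinite" backward chaining. The
# justification rule here is artificial: every belief B_n is
# justified by a new belief B_{n+1}, so the regress never
# bottoms out.

def justification_for(belief):
    """Hypothetical rule: belief B_n is justified by B_{n+1}."""
    n = int(belief.split("_")[1])
    return f"B_{n + 1}"

def walk_chain(belief, max_depth):
    """Lazily yield the justification chain behind `belief`,
    stopping at whatever finite depth the examiner chooses."""
    current = belief
    for _ in range(max_depth):
        current = justification_for(current)
        yield current

chain = list(walk_chain("B_0", max_depth=5))
print(chain)  # five steps into a regress that never bottoms out
```

Because the walk is lazy, no machine ever holds the whole infinite chain; it only ever materialises as much of it as is asked for, which is roughly what I mean by the “body” enabling deeper, though still bounded, tracing.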


Re: Knowledge Chaining With Technology

by David Laflamme – Monday, 14 October 2024, 2:02 AM

I would like to voice my support for Petr’s point about human versus artificial cognition. My view is one of definition: knowledge can only be held in a human consciousness, whereas machines can hold only information. Keeping a database for future generations is a lofty goal, and I am not denigrating it. My analogy would be our DNA: it is replete with information but achieves expression only through the interaction of a living host with its environment. Thanks for listening.


Re: Knowledge Chaining With Technology

by James Carmichael – Thursday, 17 October 2024, 2:24 AM

Thanks for responding, Wei Jing, and for bringing up these engaging considerations.

