{"id":177,"date":"2024-12-23T13:24:45","date_gmt":"2024-12-23T13:24:45","guid":{"rendered":"https:\/\/averageone.com\/educationBlog\/?p=177"},"modified":"2024-12-23T13:24:46","modified_gmt":"2024-12-23T13:24:46","slug":"theory-of-knowledge-module-10-deduction-and-induction","status":"publish","type":"post","link":"https:\/\/averageone.com\/educationBlog\/2024\/12\/23\/theory-of-knowledge-module-10-deduction-and-induction\/","title":{"rendered":"Theory of Knowledge Module: [10] Deduction and Induction"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\">Inference in Expert, Machine Learning and Deep Learning Systems<\/h3>\n\n\n\n<p><em>by\u00a0<a href=\"https:\/\/michaelmas2024.conted.ox.ac.uk\/user\/view.php?id=1626&amp;course=48\">Wei Jing HO<\/a>\u00a0&#8211;\u00a0Saturday, 9 November 2024, 3:41 PM<\/em><\/p>\n\n\n\n<p><em>Number of replies: 3<\/em><\/p>\n\n\n\n<p><strong>Understanding the &#8220;problem of induction&#8221; has economic value<\/strong>&nbsp;when&nbsp;<strong>assessing<\/strong>&nbsp;AI technologies or&nbsp;<strong>designing<\/strong>&nbsp;AI systems.<\/p>\n\n\n\n<p>This is especially true for expert systems and machine learning\/deep learning systems.<\/p>\n\n\n\n<p>Rationality and inference play a foundational role in AI studies. For instance:<\/p>\n\n\n\n<p><strong>Expert Systems<\/strong>&nbsp;have four basic components:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Knowledge\u00a0<\/strong>base<\/li>\n\n\n\n<li><strong>Inference\u00a0<\/strong>engine<\/li>\n\n\n\n<li>User interface<\/li>\n\n\n\n<li>Explanation facility<\/li>\n<\/ul>\n\n\n\n<p>For designing and building Expert Systems, understanding&nbsp;<strong>deduction is important<\/strong>&nbsp;as engineers use it to&nbsp;<strong>form logical rules to reach certain conclusions<\/strong>&nbsp;based on what was understood. 
They are&nbsp;<strong>good for automating specialised operations<\/strong>&nbsp;<em>e.g., Mycin (<a href=\"https:\/\/exhibits.stanford.edu\/feigenbaum\/browse\/the-mycin-experiments\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/exhibits.stanford.edu\/feigenbaum\/browse\/the-mycin-experiments<\/a>)<\/em><\/p>\n\n\n\n<p><strong>Machine Learning (ML) and Deep Learning (DL) Systems<\/strong>&nbsp;differ somewhat from expert systems, being data-driven rather than rule-based:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Training\u00a0<\/strong>dataset (+ Validation\/Test dataset)<\/li>\n\n\n\n<li><strong>Model\u00a0<\/strong>architecture (trained model)<\/li>\n\n\n\n<li>App feature or APIs<\/li>\n\n\n\n<li>Black box or statistical\/approximated explainable AI (XAI) output<\/li>\n<\/ul>\n\n\n\n<p>For designing and building Machine Learning (ML) and Deep Learning (DL) Systems, understanding<strong>&nbsp;induction is important<\/strong>&nbsp;as the models used in the&nbsp;<strong>machine learning process are intended to generalise patterns<\/strong>&nbsp;from cleaned data.<\/p>\n\n\n\n<p>As such, the&nbsp;<strong>&#8220;problem of induction&#8221; defines the limitations of ML and DL systems<\/strong>. 
In a living world, the&nbsp;<strong><em>patterns will change over time<\/em><\/strong>, so there is a need to continually collect and compile new training data to retrain the machine learning model.&nbsp;<em>e.g., IBM&#8217;s Drift Evaluations (<a href=\"https:\/\/dataplatform.cloud.ibm.com\/docs\/content\/wsj\/model\/wos-monitor-drift.html?context=cpdaas&amp;audience=wdp\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/dataplatform.cloud.ibm.com\/docs\/content\/wsj\/model\/wos-monitor-drift.html?context=cpdaas&amp;audience=wdp<\/a>)<\/em><\/p>\n\n\n\n<p>That&#8217;s why a vendor that promises a powerful AI product but sells it as a one-off item, without maintenance or proposed periodic retraining of the AI model, is suspect and should be carefully assessed. As with humans, an AI modelled on past data\/information\/knowledge will lose accuracy and perform suboptimally over time without regular maintenance\/retraining.<\/p>\n\n\n\n<p>At the current time, powerful AI systems usually have hybrid architectures. While deduction and induction play the major roles, abduction also has a part to play, especially in situations where the system has to respond with incomplete information or to queries outside its trained scope or rules. In such situations, like Sherlock, the AI system has to make its best guess to answer the query.<\/p>\n\n\n\n<p>As engineers we never go deep into the philosophy side, but it is good to understand where the foundations of our knowledge domain come from. 
On another thread in this forum, I feel James&#8217;s observation about the circular nature of knowledge is quite relevant.<\/p>\n\n\n\n<p>I am no philosopher, but from how AI systems are designed, I would guess that&nbsp;<strong>Deduction<\/strong>&nbsp;helps us to acquire specialised knowledge,&nbsp;<strong>Induction&nbsp;<\/strong>helps us to formulate generalised knowledge, while&nbsp;<strong>Abduction&nbsp;<\/strong>helps us face the vast world of known unknowns with partial information so that we can make the best guess to continue on?<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Re: Inference in Expert, Machine Learning and Deep Learning Systems<\/h3>\n\n\n\n<p><em>by\u00a0<a href=\"https:\/\/michaelmas2024.conted.ox.ac.uk\/user\/view.php?id=1628&amp;course=48\">Kimberly Inge<\/a>\u00a0&#8211;\u00a0Sunday, 10 November 2024, 4:43 AM<\/em><\/p>\n\n\n\n<p>This is so fascinating, Wei Jing. Does AI have to be explicitly taught whether to use deduction, induction, or abduction in a particular situation? In other words, does it have to have explicit programming or &#8220;experience&#8221; to determine which way of reasoning will help it perform optimally?<\/p>\n\n\n\n<p>Kimberly<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"yui_3_18_1_1_1734959988507_27\">Re: Inference in Expert, Machine Learning and Deep Learning Systems<\/h3>\n\n\n\n<p><em>by\u00a0<a href=\"https:\/\/michaelmas2024.conted.ox.ac.uk\/user\/view.php?id=1626&amp;course=48\">Wei Jing HO<\/a>\u00a0&#8211;\u00a0Sunday, 10 November 2024, 3:15 PM<\/em><\/p>\n\n\n\n<p>Hi Kimberly, this is a great question. In general, when we learn about inference in AI studies, it is meant as a set of theoretical frameworks to help human engineers design and build specific reasoning systems. 
But we don&#8217;t actually &#8220;teach&#8221; an AI system how to use deduction, induction or abduction, not in the way humans teach each other new skills and knowledge.<\/p>\n\n\n\n<p>Instead, the way an AI system uses inference types is usually pre-embedded into the system&#8217;s design or the model&#8217;s architecture. Unless a user interface is designed for humans to &#8220;teach&#8221; the AI certain things, this will only be accessible at the backend of the system (tied to the tech provider) for fine-tuning and adjustment purposes. The exact system and process will not be visible to consumers and may be deliberately kept opaque, because it may tie back to the AI company&#8217;s trade secrets.<\/p>\n\n\n\n<p>There could be proprietary systems like the one you describe, where learning and decision-making are more autonomous, but I am not aware of any at this time. Such designs might be more emerging than mainstream, so a patent search could show whether any such AI architectures are being explored.<\/p>\n\n\n\n<p>One good textbook explaining AI foundations is&nbsp;<strong>&#8220;Artificial Intelligence: A Modern Approach&#8221; &#8211; Stuart J. Russell and Peter Norvig<\/strong><\/p>\n\n\n\n<p>They outline four goals to pursue in AI development:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Systems that Think like Humans<\/li>\n\n\n\n<li>Systems that Think Rationally<\/li>\n\n\n\n<li>Systems that Act like Humans<\/li>\n\n\n\n<li>Systems that Act Rationally<\/li>\n<\/ul>\n\n\n\n<p><strong>Weblink<\/strong>:&nbsp;<a href=\"https:\/\/people.eecs.berkeley.edu\/~russell\/intro.html\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/people.eecs.berkeley.edu\/~russell\/intro.html<\/a><\/p>\n\n\n\n<p>AI is a large field, and to build something it helps to define a focus first and explore from there. 
When it comes to reasoning in AI:<\/p>\n\n\n\n<p><strong>For Traditional AI<\/strong><\/p>\n\n\n\n<p>Crafting &#8220;explicit instructions&#8221; for the machine to follow as rules will likely require the human designers and engineers to be deductive first and inductive later. As designers and programmers we first craft the rules and logic of a system; induction comes later in the process, to improve the system based on observed data.<\/p>\n\n\n\n<p><strong>For Machine Learning and Deep Learning Systems<\/strong><\/p>\n\n\n\n<p>The process is inverted, beginning with induction (training the model with data), after which humans step in to assess, via deductive reasoning, whether the predictions of the model make sense.<\/p>\n\n\n\n<p>I find this video from Code.org quite useful for breaking down the difference:&nbsp;<a href=\"https:\/\/studio.code.org\/s\/oceans\/lessons\/1\/levels\/1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/studio.code.org\/s\/oceans\/lessons\/1\/levels\/1<\/a><\/p>\n\n\n\n<p>To be fair, I may sometimes mix up what deductive and inductive mean. I think most engineers don&#8217;t really define their process of reasoning when building things. I usually just follow a tried and workable process &amp; framework and start doing\/building things \ud83d\ude05 My apologies if I have made any errors in my assumptions and explanation.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Re: Inference in Expert, Machine Learning and Deep Learning Systems<\/h3>\n\n\n\n<p><em>by\u00a0<a href=\"https:\/\/michaelmas2024.conted.ox.ac.uk\/user\/view.php?id=1626&amp;course=48\">Wei Jing HO<\/a>\u00a0&#8211;\u00a0Sunday, 10 November 2024, 3:50 PM<\/em><\/p>\n\n\n\n<p>After thinking about it a bit, what you describe might be agent systems.<\/p>\n\n\n\n<p>Before the AI hype, agent systems were usually developed for games, as game AIs; the older designs were usually embedded with explicit rules&#8230; I think. 
Basically, the agents could be tied to certain events (event listeners) that, when triggered, activate game AIs to carry out certain tasks.<\/p>\n\n\n\n<p>More recently, OpenAI has been exploring this space, but their agents might be based around inductive designs.&nbsp;<a href=\"https:\/\/openai.com\/index\/emergent-tool-use\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/openai.com\/index\/emergent-tool-use\/<\/a><\/p>\n\n\n\n<p>Something like Final Fantasy XIII&#8217;s customisable Paradigm Shift system, which influences internal game AI, may be more deductive:&nbsp;<a href=\"https:\/\/youtu.be\/PnPuxgMBcCg?feature=shared\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/youtu.be\/PnPuxgMBcCg?feature=shared<\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n","protected":false},"excerpt":{"rendered":"<p>Inference in Expert, Machine Learning and Deep Learning Systems by\u00a0Wei Jing HO\u00a0&#8211;\u00a0Saturday, 9 November 2024, 3:41 PM Number of replies: 3 Understanding the &#8220;problem of induction&#8221; has economic value&nbsp;when&nbsp;accessing&nbsp;AI technologies or&nbsp;designing&nbsp;AI systems. This is especially for expert systems or machine learning\/deep learning systems. Rationality and inference plays a foundation role for AI studies. 
For instance:- [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":178,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[16,12,11,15,14],"tags":[],"class_list":["post-177","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-archives","category-epistemology","category-philosophy","category-theory-of-knowledge","category-university-of-oxford"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/averageone.com\/educationBlog\/wp-json\/wp\/v2\/posts\/177","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/averageone.com\/educationBlog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/averageone.com\/educationBlog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/averageone.com\/educationBlog\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/averageone.com\/educationBlog\/wp-json\/wp\/v2\/comments?post=177"}],"version-history":[{"count":1,"href":"https:\/\/averageone.com\/educationBlog\/wp-json\/wp\/v2\/posts\/177\/revisions"}],"predecessor-version":[{"id":179,"href":"https:\/\/averageone.com\/educationBlog\/wp-json\/wp\/v2\/posts\/177\/revisions\/179"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/averageone.com\/educationBlog\/wp-json\/wp\/v2\/media\/178"}],"wp:attachment":[{"href":"https:\/\/averageone.com\/educationBlog\/wp-json\/wp\/v2\/media?parent=177"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/averageone.com\/educationBlog\/wp-json\/wp\/v2\/categories?post=177"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/averageone.com\/educationBlog\/wp-json\/wp\/v2\/tags?post=177"}],"curies":[{"name":"wp","href":"https:
\/\/api.w.org\/{rel}","templated":true}]}}