Philosophy of artificial intelligence

From Wikipedia, the free encyclopedia



Artificial intelligence has close connections with philosophy because both share several concepts and these include intelligence, action, consciousness, epistemology, and even free will.[1] Furthermore, the technology is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures) so the discipline is of considerable interest to philosophers.[2] These factors contributed to the emergence of the philosophy of artificial intelligence. Some scholars argue that the AI community's dismissal of philosophy is detrimental.[citation needed]

The philosophy of artificial intelligence attempts to answer such questions as follows:[3]

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
  • Can a machine have a mind, mental states, and consciousness in the same way that a human being can? Can it feel how things are?

Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.

Important propositions in the philosophy of AI include:

  • Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.[4]
  • The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."[5]
  • Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[6]
  • Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[7]
  • Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts..."[8]

Can a machine display general intelligence?

Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines will be able to do in the future and guides the direction of AI research. It only concerns the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; to answer this question, it does not matter whether a machine is really thinking (as a person thinks) or is just acting like it is thinking.[9]

The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth workshop of 1956:

  • Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.[5]

Arguments against the basic premise must show that building a working AI system is impossible, because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for thinking and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible.

The first step to answering the question is to clearly define "intelligence".

Intelligence

The "standard interpretation" of the Turing test.[10]

Turing test

Alan Turing[11] reduced the problem of defining intelligence to a simple question about conversation. He suggests that if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human.[4] Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks".[12] Turing's test extends this polite convention to machines:

  • If a machine acts as intelligently as a human being, then it is as intelligent as a human being.
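
The test, as described above, is a simple blind protocol, and its setup can be sketched in a few lines of code. The sketch below is only an illustration of that setup, assuming hypothetical human_reply, program_reply and judge callables; it is not any standard benchmark API.

```python
import random

def human_reply(question: str) -> str:
    # A real person answers from the other side of the "chat room".
    return input(f"(human) {question} > ")

def program_reply(question: str) -> str:
    # Placeholder chatbot; a passing program would need far more than this.
    return "That is an interesting question."

def run_imitation_game(questions, judge) -> bool:
    """Return True if the program passes: the judge cannot tell
    which hidden participant is the machine."""
    # Hide the two participants behind randomly assigned labels A and B.
    participants = {"A": human_reply, "B": program_reply}
    if random.random() < 0.5:
        participants = {"A": program_reply, "B": human_reply}

    transcript = {label: [] for label in participants}
    for q in questions:
        for label, respond in participants.items():
            transcript[label].append((q, respond(q)))

    verdict = judge(transcript)  # the judge names the machine: "A" or "B"
    machine = next(l for l, f in participants.items() if f is program_reply)
    return verdict != machine    # the program passes if the judge is wrong

# Example: a judge who guesses at random is fooled half the time.
# run_imitation_game(["What is 2+2?"], judge=lambda t: random.choice(["A", "B"]))
```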

One criticism of the Turing test is that it is explicitly anthropomorphic[citation needed]. If our ultimate goal is to create machines that are more intelligent than people, why should we insist that our machines must closely resemble people?[This quote needs a citation] Russell and Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'".[13]

Intelligent agent definition

Simple reflex agent

Recent A.I. research defines intelligence in terms of intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent.[14]

  • If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent.[15]

Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for human traits that we[who?] may not want to consider intelligent, like the ability to be insulted or the temptation to lie[dubious ]. They have the disadvantage that they fail to make the commonsense[when defined as?] differentiation between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence.[16]
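
Read operationally, the agent definition above is short enough to write down. The following minimal sketch (invented names, not a standard agent API) picks whichever action has the highest expected performance given its past experience, with an optimistic default so that untried actions get explored:

```python
from collections import defaultdict

class SimpleLearningAgent:
    """An 'agent' that perceives, acts, and is scored by a performance measure."""

    def __init__(self, actions):
        self.actions = actions
        self.totals = defaultdict(float)  # summed performance per action
        self.counts = defaultdict(int)    # times each action was tried

    def act(self, percept):
        # percept is unused in this minimal sketch; a fuller agent would
        # condition its estimates on what it perceives.
        def expected(a):
            return self.totals[a] / self.counts[a] if self.counts[a] else 1.0
        return max(self.actions, key=expected)

    def learn(self, action, performance: float):
        self.totals[action] += performance
        self.counts[action] += 1
```

On this reading the thermostat example is immediate: it perceives a temperature, acts by switching the heat on or off, and its performance measure is how close the room stays to the set point.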

Arguments that a machine can display general intelligence

The brain can be simulated

An MRI scan of a normal adult human brain

Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then .... we ... ought to be able to reproduce the behavior of the nervous system with some physical device".[17] This argument, first introduced as early as 1943[18] and vividly described by Hans Moravec in 1988,[19] is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029.[20] A non-real-time simulation of a thalamocortical model that has the size of the human brain (10^11 neurons) was performed in 2005[21] and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors.
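
Taking the reported figures at face value, the slowdown of that 2005 simulation works out to roughly a factor of four million:

```latex
\text{slowdown} \;=\; \frac{50\ \text{days}}{1\ \text{s}}
\;=\; \frac{50 \times 86{,}400\ \text{s}}{1\ \text{s}}
\;\approx\; 4.3 \times 10^{6}
```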

Few[quantify] disagree that a brain simulation is possible in theory,[citation needed][according to whom?] even critics of AI such as Hubert Dreyfus and John Searle.[22] However, Searle points out that, in principle, anything can be simulated by a computer; thus, bringing the definition to its breaking point leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes.[23] Thus, merely mimicking the functioning of a brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind[citation needed].

Human thinking is symbol processing

In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote:

  • A physical symbol system has the necessary and sufficient means of general intelligent action.[6]

This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence).[24] Another version of this position was described by philosopher Hubert Dreyfus, who called it "the psychological assumption":

  • The mind can be viewed as a device operating on bits of information according to formal rules.[25]

A distinction is usually made[by whom?] between the kind of high level symbols that directly correspond with objects in the world, such as <dog> and <tail>, and the more complex "symbols" that are present in a machine like a neural network. Early research into AI, called "good old fashioned artificial intelligence" (GOFAI) by John Haugeland, focused on these kinds of high level symbols.[26]

Arguments against symbol processing

These arguments show that human thinking does not consist (solely) of high level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required.

Gödelian anti-mechanist arguments

In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) could not prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. (The truth of the constructed Gödel statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false "Gödel statement" instead.)[citation needed] More speculatively, Gödel conjectured that the human mind can correctly eventually determine the truth or falsity of any well-grounded mathematical statement (including any possible Gödel statement), and that therefore the human mind's power is not reducible to a mechanism.[27] Philosopher John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument.[28] Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement)[citation needed]. This is provably impossible for a Turing machine[clarification needed] (and, by an informal extension, any known type of mechanical computer) to do; therefore, the Gödelian concludes that human reasoning is too powerful to be captured in a machine[dubious ].
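
For reference, the construction alluded to above can be stated compactly. Writing Prov_F for the provability predicate of a formal system F, the diagonal lemma yields a sentence G_F that asserts its own unprovability; a sketch in standard notation:

```latex
F \vdash \; G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner),
\qquad \text{and, if } F \text{ is consistent, then } F \nvdash G_F .
```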

However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate.[29][30][31] This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."[32]

More pragmatically, Russell and Norvig note that Gödel's argument only applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to prove everything in order to be intelligent[when defined as?].[33]

Less formally, Douglas Hofstadter, in his Pulitzer prize winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying".[34] But, of course, the Epimenides paradox applies to anything that makes statements, whether they are machines or humans, even Lucas himself. Consider:

  • Lucas can't assert the truth of this statement.[35]

This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.[36]

After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing computable tasks and are still restricted to tasks within the scope of Turing machines.[citation needed][clarification needed] By Penrose and Lucas's arguments, existing quantum computers are not sufficient[citation needed][clarification needed][why?], so Penrose seeks some other process involving new physics, for instance quantum gravity which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron.[37]

Dreyfus: the primacy of implicit skills

Hubert Dreyfus argued that human intelligence and expertise depended primarily on implicit skill rather than explicit symbolic manipulation, and argued that these skills would never be captured in formal rules.[39]

Dreyfus's argument had been anticipated by Turing in his 1950 paper Computing machinery and intelligence, where he had classified this as the "argument from the informality of behavior."[40] Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"[41]

Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning.[42] The situated movement in robotics research attempts to capture our unconscious skills at perception and attention.[43] Computational intelligence paradigms, such as neural nets, evolutionary algorithms and so on are mostly directed at simulated unconscious reasoning and learning. Statistical approaches to AI can make predictions which approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. In fact, AI research in general has moved away from high level symbol manipulation or "GOFAI", towards new models that are intended to capture more of our unconscious reasoning[according to whom?]. Historian and AI researcher Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[44]

Can a machine have a mind, consciousness, and mental states?

This is a philosophical question, related to the problem of other minds and the hard problem of consciousness. The question revolves around a position defined by John Searle as "strong AI":

  • A physical symbol system can have a mind and mental states.[7]

Searle distinguished this position from what he called "weak AI":

  • A physical symbol system can act intelligently.[7]

Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He argued that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question that needed to be answered.[7]

Neither of Searle's two positions are of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is necessary for intelligence). Turing wrote "I do not wish to give the impression that I think there is no mystery about consciousness… [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]."[45] Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."[46]

There are a few researchers who believe that consciousness is an essential element in intelligence, such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of "consciousness" strays very close to "intelligence." (See artificial consciousness.)

Before we can answer this question, we must be clear what we mean by "minds", "mental states" and "consciousness".

Consciousness, minds, mental states, meaning

The words "mind" and "consciousness" are used by different communities in different ways. Some new age thinkers, for example, use the word "consciousness" to describe something similar to Bergson's "élan vital": an invisible, energetic fluid that permeates life and especially the mind. Science fiction writers use the word to describe some essential property that makes us human: a machine or alien that is "conscious" will be presented as a fully human character, with intelligence, desires, will, insight, pride and so on. (Science fiction writers also use the words "sentience", "sapience," "self-awareness" or "ghost" - as in the Ghost in the Shell manga and anime series - to describe this essential human property). For others[who?], the words "mind" or "consciousness" are used as a kind of secular synonym for the soul.

For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we know something, or mean something or understand something[citation needed]. "It's not hard to give a commonsense definition of consciousness" observes philosopher John Searle.[47] What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking?

Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the "mind-body problem."[48] A related problem is the problem of meaning or understanding (which philosophers call "intentionality"): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or "phenomenology"): If two people see the same thing, do they have the same experience? Or are there things "inside their head" (called "qualia") that can be different from person to person?[49]

Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties; such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain.[50] The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of the neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness?

Arguments that a computer cannot have a mind and mental states

Searle's Chinese room

John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates general intelligent action. Suppose, specifically that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person who does not speak Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or which has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room cannot be aware. The cards certainly aren't aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind.[51]

Searle goes on to argue that actual mental states and consciousness require (yet to be described) "actual physical-chemical properties of actual human brains."[52] He argues there are special "causal properties" of brains and neurons that gives rise to minds: in his words "brains cause minds."[53]

Related arguments: Leibniz' mill, Davis's telephone exchange, Block's Chinese nation and Blockhead

Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought experiment of expanding the brain until it was the size of a mill.[54] In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym".[55] Ned Block also proposed his Blockhead argument, which is a version of the Chinese room in which the program has been re-factored into a simple set of rules of the form "see this, do that", removing all mystery from the program.

Responses to the Chinese room

Responses to the Chinese room emphasize several different points.

  • The systems reply and the virtual mind reply:[56] This reply argues that the system, including the man, the program, the room, and the cards, is what understands Chinese. Searle claims that the man in the room is the only thing which could possibly "have a mind" or "understand", but others disagree, arguing that it is possible for there to be two minds in the same physical place, similar to the way a computer can simultaneously "be" two machines at once: one physical (like a Macintosh) and one "virtual" (like a word processor).
  • Speed, power and complexity replies:[57] Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
  • Robot reply:[58] To truly understand, some believe the Chinese Room needs eyes and hands. Hans Moravec writes: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."[59]
  • Brain simulator reply:[60] What if the program simulates the sequence of nerve firings at the synapses of an actual brain of an actual Chinese speaker? The man in the room would be simulating an actual brain. This is a variation on the "systems reply" that appears more plausible because "the system" now clearly operates like a human brain, which strengthens the intuition that there is something besides the man in the room that could understand Chinese.
  • Other minds reply and the epiphenomena reply:[61] Several people have noted that Searle's argument is just a version of the problem of other minds, applied to machines. Since it is difficult to decide if people are "actually" thinking, we should not be surprised that it is difficult to answer the same question about machines.
A related question is whether "consciousness" (as Searle understands it) exists. Searle argues that the experience of consciousness can't be detected by examining the behavior of a machine, a human being or any other animal. Daniel Dennett points out that natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) can't be produced by natural selection. Therefore, either natural selection did not produce consciousness, or "strong AI" is correct in that consciousness can be detected by a suitably designed Turing test.

Is thinking a kind of computation?

The computational theory of mind or "computationalism" claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program and a computer. The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules).[62] The latest version is associated with philosophers Hilary Putnam and Jerry Fodor.[63]

This question bears on our earlier questions: if the human brain is a kind of computer then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim that (as Hobbes wrote):

  • Reasoning is nothing but reckoning[8]

In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of computationalism claim that (as Stevan Harnad characterizes it):

  • Mental states are just implementations of (the right) computer programs[64]

This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad).[64]

Other related questions

Alan Turing noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as:

Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.[65]

Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression."[65] All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence.

Can a machine have emotions?

If "emotions" are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions. Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people".[66] Fear is a source of urgency. Empathy is a necessary component of good human computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love."[66] Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species."[67]

Can a machine be self-aware?

"Self awareness", as noted above, is sometimes used by science fiction writers as a name for the essential human property that makes a character fully human. Turing strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it think about itself? Viewed in this way, a program can be written that can report on its own internal states, such as a debugger.[65] Though arguably self-awareness often presumes a bit more capability; a machine that can ascribe meaning in some way to not only its own state but in general postulating questions without solid answers: the contextual nature of its existence now; how it compares to past states or plans for the future, the limits and value of its work product, how it perceives its performance to be valued-by or compared to others.

Can a machine be original or creative?

Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest.[68] He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways.[69] It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.) Kaplan and Haenlein suggest that machines can display scientific creativity, while it seems likely that humans will have the upper hand where artistic creativity is concerned.[70]

In 2009, scientists at Aberystwyth University in Wales and the U.K.'s University of Cambridge designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings.[71] Also in 2009, researchers at Cornell developed Eureqa, a computer program that extrapolates formulas to fit the input data, such as finding the laws of motion from a pendulum's motion.
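
A drastically simplified sketch of that formula-fitting idea follows (not Eureqa's actual algorithm, which evolves candidate formulas by symbolic regression): it scores a fixed menu of hypotheses against simulated pendulum data and keeps the best fit.

```python
import math

# Candidate formulas for the period T of a pendulum of length L (meters).
candidates = {
    "T = L":                  lambda L: L,
    "T = L**2":               lambda L: L ** 2,
    "T = 2*pi*sqrt(L/9.81)":  lambda L: 2 * math.pi * math.sqrt(L / 9.81),
}

# "Observed" data, here generated from the true small-angle pendulum law.
data = [(L, 2 * math.pi * math.sqrt(L / 9.81)) for L in (0.5, 1.0, 2.0)]

def squared_error(formula):
    return sum((formula(L) - T) ** 2 for L, T in data)

best = min(candidates, key=lambda name: squared_error(candidates[name]))
print(best)  # -> "T = 2*pi*sqrt(L/9.81)": the pendulum law wins
```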

Can a machine be benevolent or hostile?

This question (like many others in the philosophy of artificial intelligence) can be presented in two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile" becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine "deliberately" set out to do harm? The latter is the question "can a machine have conscious states?" (such as intentions) in another form.[45]

The question of whether highly intelligent and completely autonomous machines would be dangerous has been examined in detail by futurists (such as the Singularity Institute). (The obvious element of drama has also made the subject popular in science fiction, which has considered many different possible scenarios where intelligent machines pose a threat to mankind.)

One issue is that machines may acquire the autonomy and intelligence required to be dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will suddenly become thousands or millions of times more intelligent than humans. He calls this "the Singularity."[72] He suggests that it may be somewhat or possibly very dangerous for humans.[73] This is discussed by a philosophy called Singularitarianism.

In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed whether, and to what extent, computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[72]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[74] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[75][76]

The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[77] They point to programs like the Language Acquisition Device, which can emulate human interaction.

Some have suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[78]

Can a machine have a soul?

Finally, those who believe in the existence of a soul may argue that "Thinking is a function of man's immortal soul." Alan Turing called this "the theological objection". He writes:

In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.[79]

Views on the role of philosophy

Some scholars argue that the AI community's dismissal of philosophy is detrimental. In the Stanford Encyclopedia of Philosophy, some philosophers argue that the role of philosophy in AI is underappreciated.[2] Physicist David Deutsch argues that without an understanding of philosophy or its concepts, AI development would suffer from a lack of progress.[80]

Bibliography & Conferences

The main bibliography on the subject, with several sub-sections, is on PhilPapers.

The main conference series on the issue is "Philosophy and Theory of AI" (PT-AI), run by Vincent C. Müller.

Notes

  1. ^ McCarthy, John. "The Philosophy of AI and the AI of Philosophy". jmc.stanford.edu. Retrieved 2018-09-18.
  2. ^ a b Bringsjord, Selmer; Govindarajulu, Naveen Sundar (2018), "Artificial Intelligence", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Fall 2018 ed.), Metaphysics Research Lab, Stanford University, retrieved 2018-09-18
  3. ^ Russell & Norvig 2003, p. 947 define the philosophy of AI as consisting of the first two questions, and the additional question of the ethics of artificial intelligence. Fearn 2007, p. 55 writes "In the current literature, philosophy has two chief roles: to determine whether or not such machines would be conscious, and, second, to predict whether or not such machines are possible." The last question bears on the first two.
  4. ^ a b This is a paraphrase of the essential point of the Turing test. Turing 1950, Haugeland 1985, pp. 6–9, Crevier 1993, p. 24, Russell & Norvig 2003, pp. 2–3 and 948
  5. ^ a b McCarthy et al. 1955. This assertion was printed in the program for the Dartmouth Conference of 1956, widely considered the "birth of AI." See also Crevier 1993, p. 28.
  6. ^ a b Newell & Simon 1976 and Russell & Norvig 2003, p. 18
  7. ^ a b c d This version is from Searle (1999), and is also quoted in Dennett 1991, p. 435. Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (Searle 1980, p. 1). Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."
  8. ^ a b Hobbes 1651, chpt. 5
  9. ^ See Russell & Norvig 2003, p. 3, where they make the distinction between acting rationally and being rational, and define AI as the study of the former.
  10. ^ Saygin 2000.
  11. ^ Turing 1950 and see Russell & Norvig 2003, p. 948, where they call his paper "famous" and write "Turing examined a wide variety of possible objections to the possibility of intelligent machines, including virtually all of those that have been raised in the half century since his paper appeared."
  12. ^ Turing 1950 under "The Argument from Consciousness"
  13. ^ Russell & Norvig 2003, p. 3
  14. ^ Russell & Norvig 2003, pp. 4–5, 32, 35, 36 and 56
  15. ^ Russell and Norvig would prefer the word "rational" to "intelligent".
  16. ^ Russell & Norvig (2003, pp. 48–52) consider a thermostat a simple form of intelligent agent, known as a reflex agent. For an in-depth treatment of the role of the thermostat in philosophy see Chalmers (1996, pp. 293–301) "4. Is Experience Ubiquitous?" subsections What is it like to be a thermostat?, Whither panpsychism?, and Constraining the double-aspect principle.
  17. ^ Dreyfus 1972, p. 106
  18. ^ Pitts & McCullough 1943
  19. ^ Moravec 1988
  20. ^ Kurzweil 2005, p. 262. Also see Russell & Norvig, p. 957 and Crevier 1993, pp. 271 and 279. The most extreme form of this argument (the brain replacement scenario) was put forward by Clark Glymour in the mid-1970s and was touched on by Zenon Pylyshyn and John Searle in 1980
  21. ^ Eugene Izhikevich (2005-10-27). "Eugene M. Izhikevich, Large-Scale Simulation of the Human Brain". Vesicle.nsi.edu. Archived from the original on 2009-05-01. Retrieved 2010-07-29.
  22. ^ Hubert Dreyfus writes: "In general, by accepting the fundamental assumptions that the nervous system is part of the physical world and that all physical processes can be described in a mathematical formalism which can, in turn, be manipulated by a digital computer, one can arrive at the strong claim that the behavior which results from human 'information processing,' whether directly formalizable or not, can always be indirectly reproduced on a digital machine." (Dreyfus 1972, pp. 194–5). John Searle writes: "Could a man-made machine think? Assuming it is possible to produce artificially a machine with a nervous system, ... the answer to the question seems to be obviously, yes ... Could a digital computer think? If by 'digital computer' you mean anything at all that has a level of description where it can be correctly described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think." (Searle 1980, p. 11)
  23. ^ Searle 1980, p. 7
  24. ^ Searle writes "I like the straight forwardness of the claim." Searle 1980, p. 4
  25. ^ Dreyfus 1979, p. 156
  26. ^ Haugeland 1985, p. 5
  27. ^ Gödel, Kurt, 1951, Some basic theorems on the foundations of mathematics and their implications in Solomon Feferman, ed., 1995. Collected works / Kurt Gödel, Vol. III. Oxford University Press: 304-23. - In this lecture, Gödel uses the incompleteness theorem to arrive at the following disjunction: (a) the human mind is not a consistent finite machine, or (b) there exist Diophantine equations for which it cannot decide whether solutions exist. Gödel finds (b) implausible, and thus seems to have believed the human mind was not equivalent to a finite machine, i.e., its power exceeded that of any finite machine. He recognized that this was only a conjecture, since one could never disprove (b). Yet he considered the disjunctive conclusion to be a "certain fact".
  28. ^ Lucas 1961, Russell & Norvig 2003, pp. 949–950, Hofstadter 1979, pp. 471–473, 476–477
  29. ^ Graham Oppy (20 January 2015). "Gödel's Incompleteness Theorems". Stanford Encyclopedia of Philosophy. Retrieved 27 April 2016. These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail.
  30. ^ Stuart J. Russell; Peter Norvig (2010). "26.1.2: Philosophical Foundations/Weak AI: Can Machines Act Intelligently?/The mathematical objection". Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-604259-4. ...even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations.
  31. ^ Mark Colyvan. An introduction to the philosophy of mathematics. Cambridge University Press, 2012. From 2.2.2, 'Philosophical significance of Gödel's incompleteness results': "The accepted wisdom (with which I concur) is that the Lucas-Penrose arguments fail."
  32. ^ LaForte, G., Hayes, P. J., Ford, K. M. 1998. Why Gödel's theorem cannot refute computationalism. Artificial Intelligence, 104:265-286, 1998.
  33. ^ Russell & Norvig 2003, p. 950 They point out that real machines with finite memory can be modeled using propositional logic, which is formally decidable, and Gödel's argument does not apply to them at all.
  34. ^ Hofstadter 1979
  35. ^ According to Hofstadter 1979, pp. 476–477, this statement was first proposed by C. H. Whiteley
  36. ^ Hofstadter 1979, pp. 476–477, Russell & Norvig 2003, p. 950, Turing 1950 under "The Argument from Mathematics" where he writes "although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect."
  37. ^ Penrose 1989
  38. ^ Litt, Abninder; Eliasmith, Chris; Kroon, Frederick W.; Weinstein, Steven; Thagard, Paul (6 May 2006). "Is the Brain a Quantum Computer?". Cognitive Science. 30 (3): 593–603. doi:10.1207/s15516709cog0000_59. PMID 21702826.
  39. ^ Dreyfus 1972, Dreyfus 1979, Dreyfus & Dreyfus 1986. See also Russell & Norvig 2003, pp. 950–952, Crevier 1993, pp. 120–132 and Fearn 2007, pp. 50–51
  40. ^ Russell & Norvig 2003, pp. 950–51
  41. ^ Turing 1950 under "(8) The Argument from the Informality of Behavior"
  42. ^ Russell & Norvig 2003, p. 52
  43. ^ See Brooks 1990 and Moravec 1988
  44. ^ Crevier 1993, p. 125
  45. ^ a b Turing 1950 under "(4) The Argument from Consciousness". See also Russell & Norvig, pp. 952–3, where they identify Searle's argument with Turing's "Argument from Consciousness."
  46. ^ Russell & Norvig 2003, p. 947
  47. ^ "[P]eople always tell me it was very hard to define consciousness, but I think if you're just looking for the kind of commonsense definition that you get at the beginning of the investigation, and not at the hard nosed scientific definition that comes at the end, it's not hard to give commonsense definition of consciousness." The Philosopher's Zone: The question of consciousness. Also see Dennett 1991
  48. ^ Blackmore 2005, p. 2
  49. ^ Russell & Norvig 2003, pp. 954–956
  50. ^ For example, John Searle writes: "Can a machine think? The answer is, obviously, yes. We are precisely such machines." (Searle 1980, p. 11)
  51. ^ Searle 1980. See also Cole 2004, Russell & Norvig 2003, pp. 958–960, Crevier 1993, pp. 269–272 and Fearn 2007, pp. 43–50
  52. ^ Searle 1980, p. 13
  53. ^ Searle 1984
  54. ^ Cole 2004, 2.1, Leibniz 1714, 17
  55. ^ Cole 2004, 2.3
  56. ^ Searle 1980 under "1. The Systems Reply (Berkeley)", Crevier 1993, p. 269, Russell & Norvig 2003, p. 959, Cole 2004, 4.1. Among those who hold to the "system" position (according to Cole) are Ned Block, Jack Copeland, Daniel Dennett, Jerry Fodor, John Haugeland, Ray Kurzweil and Georges Rey. Those who have defended the "virtual mind" reply include Marvin Minsky, Alan Perlis, David Chalmers, Ned Block and J. Cole (again, according to Cole 2004)
  57. ^ Cole 2004, 4.2 ascribes this position to Ned Block, Daniel Dennett, Tim Maudlin, David Chalmers, Steven Pinker, Patricia Churchland and others.
  58. ^ Searle 1980 under "2. The Robot Reply (Yale)". Cole 2004, 4.3 ascribes this position to Margaret Boden, Tim Crane, Daniel Dennett, Jerry Fodor, Stevan Harnad, Hans Moravec and Georges Rey
  59. ^ Quoted in Crevier 1993, p. 272
  60. ^ Searle 1980 under "3. The Brain Simulator Reply (Berkeley and M.I.T.)" Cole 2004 ascribes this position to Paul and Patricia Churchland and Ray Kurzweil
  61. ^ Searle 1980 under "5. The Other Minds Reply", Cole 2004, 4.4. Turing 1950 makes this reply under "(4) The Argument from Consciousness." Cole ascribes this position to Daniel Dennett and Hans Moravec.
  62. ^ Dreyfus 1979, p. 156, Haugeland 1985, pp. 15–44
  63. ^ Horst 2005
  64. ^ a b Harnad 2001
  65. ^ a b c Turing 1950 under "(5) Arguments from Various Disabilities"
  66. ^ a b Quoted in Crevier 1993, p. 266
  67. ^ Crevier 1993, p. 266
  68. ^ Turing 1950 under "(6) Lady Lovelace's Objection"
  69. ^ Turing 1950 under "(5) Arguments from Various Disabilities"
  70. ^ "Kaplan Andreas; Michael Haenlein". Business Horizons. 62 (1): 15–25. January 2019. doi:10.1016/j.bushor.2018.08.004.
  71. ^ Katz, Leslie (2009-04-02). "Robo-scientist makes gene discovery-on its own | Crave - CNET". News.cnet.com. Retrieved 2010-07-29.
  72. ^ a b Scientists Worry Machines May Outsmart Man, by John Markoff, NY Times, July 26, 2009.
  73. ^ The Coming Technological Singularity: How to Survive in the Post-Human Era, by Vernor Vinge, Department of Mathematical Sciences, San Diego State University, (c) 1993 by Vernor Vinge.
  74. ^ Call for debate on killer robots, by Jason Palmer, science and technology reporter, BBC News, 8/3/09.
  75. ^ New Navy-funded Report Warns of War Robots Going "Terminator", by Jason Mick (Blog), dailytech.com, February 17, 2009.
  76. ^ Navy report warns of robot uprising, suggests a strong moral compass, by Joseph L. Flatley, engadget.com, February 18, 2009.
  77. ^ AAAI Presidential Panel on Long-Term AI Futures 2008-2009 Study, Association for the Advancement of Artificial Intelligence, Accessed 7/26/09.
  78. ^ Article at Asimovlaws.com, July 2004, accessed 7/27/09. Archived June 30, 2009, at the Wayback Machine
  79. ^ Turing 1950 under "(1) The Theological Objection", although he also writes, "I am not very impressed with theological arguments whatever they may be used to support"
  80. ^ Deutsch, David (2012-10-03). "Philosophy will be the key that unlocks artificial intelligence | David Deutsch". the Guardian. Retrieved 2018-09-18.
