
Talk:Lateral thinking



Why is lateral thinking considered "pseudoscientific"?


I don't get it. From my understanding, it's that lateral thinking formalizes creativity and the creative thinking process into a framework for day-to-day use. I have been using it for more than a decade now to great effect. Finecreate9 (talk) 04:49, 20 July 2023 (UTC)[reply]

AI and lateral thinking


I want to add a brief section about Artificial Intelligence (AI) doing lateral thinking. Something like...

Artificial intelligence (AI) appears capable of lateral thinking. In 2024, for example, researchers at the Australian Institute for Machine Learning reported "80% agreement with human judgements" when testing large language models (LLMs) on lateral thinking problems.[1] They note, however, that "lateral thinking capabilities remain under-explored and challenging to measure due to the complexity of assessing creative thought processes and the scarcity of relevant data."

Editor's interruption: I feel obligated to say more but haven't found good sources. Most of my web searches produced blogs whose conflicts of interest seem too questionable. It's difficult for me (with no expertise and few resources) to find academic papers on the subject. For instance, this paper <ceeol.com/search/article-detail?id=1214770> is the only one I have found (so far) that seems acceptable, and it's too narrow to include here. Please help in this endeavor.

AI and lateral thinking are both ambiguous and relatively new. (Clarification: lateral thinking itself isn't new, but the keyword/phrase is.) I'm tempted to conclude...

Due to (a) the breadth of artificial intelligences, (b) the variety of their applications, (c) their rapidly evolving powers, and (d) their nebulous or opaque formulation,[2][3] it is challenging to independently confirm the thinking of AI. The subject of lateral thinking is similarly ambiguous.[citation needed] Summarizing AI doing lateral thinking therefore remains a challenge.

References

Ashtflash2 (talk) 17:08, 18 January 2025 (UTC)[reply]

AI Trolley Problem Joke Anecdote


As reported by Llana Gordon of the Daily Dot, an artificial intelligence (AI) called Kling1.6 allegedly resolved the trolley problem (an ethical scenario in which an arbiter must choose between sparing a single victim or multiple victims of a runaway streetcar) with lateral thinking.

"[The] AI’s proposed solution to the trolley problem is to reverse course and back away slowly from the potential victims, thus sparing itself the work of having to choose at all." [1]

As this was revealed on the social media platform X (formerly Twitter), some commenters keenly noticed a parallel to the 1983 film WarGames, in which a military supercomputer "analyzes a series of nuclear war scenarios and discovers that all of them end with humanity's annihilation. 'The only winning move,' the supercomputer concludes, 'is not to play.'"[1]

I (Ashtflash2) could not substantiate this reporting. It smacks of puffery and is likely native advertising. The only credible source I found (as of this writing) is unrelated: an advisory piece[2] by Kai-Fu Lee for the World Economic Forum.

Please reply or edit if your research leads to better results. Thank you.

References

  1. ^ a b Gordon, Llana (2024, January 16). AI's first good joke gets users laughing and thinking. The Daily Dot. https://www.dailydot.com/memes/first-good-ai-joke-trolley-problem-meme/
  2. ^ Lee, Kai-Fu (2022, May 23). AI's trolley problem debate can lead us to surprising conclusions. World Economic Forum. https://www.weforum.org/stories/2022/05/ai-s-trolley-problem-debate-can-lead-us-to-surprising-conclusions/