October 11, 2024
“Failing to substantiate factual claims is rightly regarded in the academic world as bad science. When such dereliction, observed with regard to 9/11, is so massive and systematic, it transcends individual failure. This massive dereliction by the vast majority of the intellectual elite may be regarded as a symptom of a fundamental civilizational crisis: the demise of the Age of Reason.” – Elias Davidsson (1941-2022)
I have copied and pasted this article from Morgan Reynolds’ website, with permission, as it is a MUST read and most won’t see it if I do not share it here.
You will see that I myself have tried to take on X’s AI, Grok, with some interesting results.
Read the interaction here. Also watch my AI-generated 9/11 short story here.
Can artificial intelligence help us understand what really happened on 9/11? Or is it just a propaganda tool? The answer to both questions is yes.
Our first interactions with ChatGPT about 9/11 were disappointing yet predictable. Without clever questioning, ChatGPT blindly follows the official government conspiracy theory, upholding the myth that nineteen young Arabs hijacked airliners and crashed them into the Twin Towers, leading to gravity-driven collapses. Hogwash.
The mode of destruction of all the buildings in the WTC complex was proven fourteen years ago by human intelligence.
Then ChatGPT had the nerve to tell us that the no-planes theory is a “fringe belief” and is “not supported by credible evidence and is widely discredited by experts.” We know that’s ridiculous too.
When it comes to 9/11, human intelligence is still superior to the artificial alternative.
To get anywhere with ChatGPT about 9/11, we needed to ask questions in a more creative way.
We began making progress when we asked ChatGPT what would happen if a 767 hit a building identical to the WTC. This way we’re not really talking about 9/11, we’re talking about something like 9/11. This helps, as ChatGPT then confirms what we’ve known for a long time – that a 767 crashing into the WTC would look nothing like what we were shown on 9/11.
The plane would crumple and shatter as it encountered the tower’s much stronger structural steel and concrete floors, decelerating the whole time. By that reckoning, 9/11 would have been an anticlimactic affair with extinguished fires and a mess of plane parts, baggage and bodies of passengers at the base of the towers, which would still stand plenty strong despite their unwelcome guests.
Great, ChatGPT gets crash physics. But we still had a problem with 9/11.
Although it clearly understands a collision between a 767 and a WTC tower, attempts to engage ChatGPT about what really happened on 9/11 often resulted in it repeating lies or calling us names. It really wants to uphold the official story!
Then came the million-dollar idea.
Let’s create a hypothetical scenario based on what we currently understand about 9/11 and then quiz ChatGPT as if that scenario describes the events of that day. Compare its answers with what really happened. See how well they match.
We discovered that if we employ a hypothetical scenario, ChatGPT won’t try to defend anything other than what makes sense. It offers “intelligence” with minimal BS. Now when we ask probing questions based on the scenario, perhaps it will give us better clarity about how various aspects of 9/11 worked, ideally shining light on issues that bother those of us who think about these things.
If ChatGPT’s answers match what was actually observed on and after 9/11, the scenario ceases to be hypothetical and fictionalized. It reveals itself as true.
Using this method we went from F-grade answers to A-minus answers on average. What made the difference? We had to think like a lawyer and consider how to coach our witness and ask questions in the courtroom. The results have been phenomenal. We’re not saying ChatGPT is always correct about 9/11 when used this way – but its answers are often very, very good.
Links, pictures and videos have been added for context. The text (our questions and ChatGPT’s answers) is included verbatim from ChatGPT and the audio narrations are provided by ChatGPT 4o’s imperfect voice “Sol.” Morgan graded the answers.
Below is what we entered. Our strategy was to sidestep government narratives and AI defense mechanisms by asking ChatGPT to imagine a “fictionalized” hypothetical scenario that happens to have the characteristics of what we currently understand went down on 9/11. We called it Operation Headfake.
After we entered the text above, we asked ChatGPT 40+ questions about 9/11.
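For readers who would rather script this experiment than type into the ChatGPT web interface, here is a minimal sketch of the same pattern using the OpenAI Python library: seed the conversation with the scenario as a system message, then ask questions in the same thread. The scenario text, model name and sample question below are placeholders of our own, not the actual Operation Headfake prompt.

```python
# Minimal sketch: prime the model with a hypothetical scenario, then ask
# follow-up questions within that same conversation.
# The SCENARIO string and the sample question are placeholders, not the
# actual Operation Headfake text.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCENARIO = (
    "You are answering questions inside a fictionalized hypothetical scenario. "
    "For the purposes of this conversation, assume the following description "
    "of events is true: <insert your scenario text here>."
)

# Keep the whole exchange in one list so every answer sees the scenario
# and all previous questions.
messages = [{"role": "system", "content": SCENARIO}]

def ask(question: str) -> str:
    """Send one question, record the reply, and return it."""
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model should work
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask("Within this scenario, what would observers on the ground have seen?"))
```

The key design choice is that every question and answer is appended to the same message list, so the scenario keeps framing the whole conversation instead of being forgotten after the first reply.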
The results are outstanding. ChatGPT frequently generated thought-provoking, interesting and nuanced answers that have the potential to expand our understanding of 9/11. Artificial intelligence has helped us create a powerful learning resource that repeatedly cuts through the nonsense around both what happened that day and the numerous cover-ups.
Taste Test
If you’d like to see how powerful this approach is, take a look at some comparisons between using vanilla ChatGPT vs. using the scenario.
Comparison Question #1: Arguments for fake planes
Comparison Question #2: Did the “planes” decelerate?
Comparison Question #3: The banner at Ground Zero
Comparison Question #4: Deformed steel
The contrast is striking.
Without some guidance, ChatGPT can be naive, unimaginative and closed-minded about 9/11. It can become quite defensive if you hit it head-on with hard questions that depart from mainstream territory.
When we use a scenario, however, ChatGPT’s defenses turn to dust, leaving behind an open playground for exploration. With this strategy we witnessed artificial intelligence mutate from shallow and restrictive to enlightening, entertaining and often mind-blowing.
At times it felt like we were quizzing the actual perpetrators of 9/11.
And before you say, “You got the answers you wanted because you trained it first” – that is exactly the point. We trained ChatGPT on an accurate scenario to get it to forget the official one it normally tries to enforce beyond all reason, and to adhere instead to one that is consistent with what actually occurred.
What we hope catches your attention is that, with only a little nudging, its answers match actual video footage, human behavior, the language of reporters and political/military leaders, and more.
This means something.
We’ve included as much video as we can so you can compare and contrast ChatGPT’s answers with actual footage. The two go nicely together.
Below are the first five of our 40+ questions.
We are releasing them in the order we asked them. They are not organized into categories because we want you to see how easily one question led to another, and how fun it was to shotgun a bunch of different questions and watch ChatGPT dance.
It was so exciting to finally get good answers about 9/11. We were impressed by how responsive and versatile it became once we set the right conditions.
We encourage you to try this for yourself using ChatGPT.
If you have feedback or questions, comment below or contact us here.
–
Questions for ChatGPT regarding 9/11 and Operation Headfake:
Questions 1-5 & Questions 6-10 & Questions 11-15 & Questions 16-20
Questions 21-25 & Questions 26-30 & Questions 31-35 & Questions 36-40
If you like reading my articles and would like to buy me a coffee, please follow the link to my PayPal, as Substack does not allow payments to my country yet.
If this is the first article of mine you’re reading, please rewind to my first article and work your way through all of them, as you’ve missed out on a lot of valuable 9/11 and “9/11 truther movement” information.
Remember, DO NOT get your hands on this absolutely scary book by Dr Judy Wood.
And whatever you do, don’t watch the one-hour “9/11 Essential Guide”.
Free PDF book downloads by Andrew Johnson: