“The real question is not whether machines think, but whether men do.” — B.F. Skinner
Artificial Intelligence (AI) has advanced so rapidly that machines can now perform tasks once thought to require human intelligence — from diagnosing diseases to composing music. Yet, amid this technological marvel, a deeper question lingers: Can AI truly think like us? And if not, should we allow it to make decisions that affect our lives?
The Illusion of Thinking
At first glance, AI systems seem to think. They learn, predict, and even create. But this intelligence is synthetic, not sentient. Machines process symbols and patterns, not meanings.
“AI doesn’t understand — it calculates.”
When we interact with a chatbot or autonomous system, what we perceive as “understanding” is actually a sophisticated illusion. The machine predicts the next most likely word or action based on data, not insight.
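As a minimal sketch of what "predicting the next most likely word" means, consider the toy model below. The vocabulary and probabilities are invented for illustration; a real language model learns billions of such statistics, but the principle is the same: the system selects likely continuations, and nothing in the process represents meaning.

```python
import random

# Toy "language model": for each two-word context, a learned distribution
# over next words. These probabilities are made up for illustration.
next_word_probs = {
    ("the", "patient"): {"recovered": 0.5, "improved": 0.3, "declined": 0.2},
    ("patient", "recovered"): {"quickly": 0.6, "fully": 0.4},
}

def predict_next(context):
    """Pick the next word by sampling from the learned distribution.

    The model has no concept of patients or recovery; it only knows
    which strings tended to follow which strings in its training data.
    """
    probs = next_word_probs[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next(("the", "patient")))  # e.g. "recovered"
```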
Human thinking, by contrast, is layered with consciousness, emotion, and moral judgment. We think about things — guided by empathy, curiosity, and purpose. AI lacks this inner dimension. It can mimic reasoning but cannot experience it.

When Machines Decide
Even though AI cannot truly think, we increasingly entrust it to decide. Algorithms determine who gets a loan, which job applicant is shortlisted, and even which prisoner might reoffend. This delegation of moral and social power raises serious ethical questions.
1. Accountability
Who is to blame when AI makes a harmful decision? The programmer, the user, or the machine? Since AI lacks moral agency, it cannot be held accountable. Responsibility must remain human, yet often, systems are so complex that no single person can be clearly identified as responsible.
2. Bias and Fairness
AI reflects the data it is trained on; if that data is biased, so are its decisions. Contrary to the common assumption, automation does not guarantee objectivity. Bias in machine learning is not a flaw of the technology but a flaw of society, mirrored through data.
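A toy sketch makes the mirroring concrete. The historical approval numbers below are entirely hypothetical; the point is only that a model fit to skewed past decisions reproduces the skew as its "prediction":

```python
# Hypothetical historical loan decisions, skewed against group B.
# Each entry is a (group, approved) pair; the imbalance is invented.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def learned_approval_rate(group):
    """The approval rate a naive model learns for each group."""
    outcomes = [approved for g, approved in history if g == group]
    return sum(outcomes) / len(outcomes)

# A model that scores applicants by their group's historical rate
# simply hands past bias back as an apparently objective number.
for group in ("A", "B"):
    print(group, learned_approval_rate(group))  # A: 0.8, B: 0.3
```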
3. Transparency
Machine learning models often operate as “black boxes.” Their decision paths are hidden even from their creators. In fields like healthcare or criminal justice, a decision without explanation is a decision without justice.
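One practical response, sketched below with hypothetical names and rules, is to require that every automated decision carry a human-readable justification rather than a bare verdict:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    reasons: list[str]  # every decision must carry its justification

def assess_claim(income: float, debt: float) -> Decision:
    """Hypothetical rule-based assessor: the decision logic is simple,
    but crucially its reasoning is exposed instead of hidden."""
    if debt > income * 0.5:
        return Decision("denied",
                        [f"debt {debt} exceeds half of income {income}"])
    return Decision("approved",
                    [f"debt {debt} is within half of income {income}"])

print(assess_claim(income=40000, debt=25000))
```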
The Moral Machine
Autonomous vehicles illustrate the moral dilemmas of AI vividly. Imagine a self-driving car facing an unavoidable crash: should it sacrifice its passenger to save pedestrians?
This is not a programming question — it’s a philosophical one.
Machines lack ethical intuition; they follow instructions. Therefore, the ethics of AI lie not in the code itself, but in the intentions and values of the humans who write and deploy it.
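A deliberately crude sketch makes the point visible. This is not a real control policy; it shows that whatever "moral" choice an autonomous vehicle appears to make is a rule a human chose to write in advance:

```python
# A human value judgment, frozen into code by a developer.
PRIORITIZE_PEDESTRIANS = True

def crash_maneuver(pedestrians_ahead: int) -> str:
    """Return the programmed action in an unavoidable-crash scenario.

    The function does not weigh lives or deliberate; it executes
    whichever rule its authors decided on beforehand.
    """
    if PRIORITIZE_PEDESTRIANS and pedestrians_ahead > 0:
        return "swerve"  # risk the passenger
    return "brake"       # stay on course
```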
The Path Forward
“We can only see a short distance ahead, but we can see plenty there that needs to be done.” — Alan Turing
The challenge is not to make AI human, but to make it humane.
Ethical AI should embody transparency, fairness, and accountability. Policies must ensure human oversight in critical decisions, and developers should design systems that align with shared moral principles.
The real shift is not that AI is becoming more human; it is that humans must become more responsible.
AI can simulate thought, but it cannot feel, understand, or care. As we grant it greater decision-making power, we must remember that ethics cannot be automated.
The future of AI depends not on machines learning to think like us — but on us ensuring that their decisions reflect the best of who we are.
“Technology is neither good nor bad; nor is it neutral.” — Melvin Kranzberg
Lateef Chaudhry
Assistant Director, Department of Media and IT