Monday, April 28, 2025

Think about the risks before rushing to use AI



From Redzanur Rahman

Like many Malaysians, I was shocked and disappointed to learn that the education ministry’s SPM analysis report featured a wrongly drawn Jalur Gemilang.

Apparently, the image used, which may have been generated with artificial intelligence tools, showed our flag with two stars! This is a very serious mistake because the Jalur Gemilang is a symbol of our nation’s pride.

What’s truly unbelievable is that this mistake came from the education ministry on the day they announced the SPM results! It’s a huge irony.

As the ministry celebrates the fantastic achievement of over 10,000 straight-A students, this embarrassing error with our flag happened right under their noses. The ministry needs to answer for this, and those responsible for the oversight must be held accountable.

Sadly, this isn’t the first time something like this has happened recently. Just last week, we saw similar mistakes which reportedly also involved AI-generated images. Both the Sin Chew Daily and Kwong Wah Yit Poh newspapers published images of the Jalur Gemilang without the crescent moon. Even a foreign company at a baby expo displayed our flag incorrectly in their video.

These incidents reflect a worrying trend. While AI can be very useful, using it carelessly without proper checks can lead to mistakes and hurt people’s feelings, especially when it involves national symbols.

It seems we are rushing to use AI without thinking carefully about the risks.

Why outsource our thinking to AI?

We must remember that convenience should never come at the expense of truth and responsibility. This principle is crucial because as the recent flag incidents clearly show, unchecked AI can easily produce output that is factually incorrect or deeply insensitive.

This failure to uphold the ‘truth’ isn’t just about getting facts wrong; it extends to generating content that disrespects national and cultural symbols or values because AI lacks genuine understanding, which can severely damage an organisation’s reputation and upset the public.

Blindly accepting AI’s output for convenience can also compromise fairness. AI systems learn from the data they are fed, and if this data contains hidden biases related to race or gender, AI might perpetuate or even amplify these biases. This could lead to unfair decisions in important areas like job hiring or loan approvals – directly contradicting the Malaysian value of treating everyone fairly.

Moreover, the convenience of using easily accessible online AI tools introduces significant privacy and security risks. Staff might inadvertently feed confidential company strategies or sensitive customer data such as IC numbers or addresses into these systems, potentially exposing this information to misuse or breaches and possibly violating the Personal Data Protection Act.

The speed and ease offered by AI cannot justify the potential harm caused by spreading inaccurate information, making biased decisions, or failing to protect sensitive data.

This doesn’t mean we should abandon AI altogether, as it certainly offers powerful ways to work faster and smarter. However, the key lies in using it responsibly, which involves a few simple but vital steps.

Always double-check the AI’s output with human oversight. This is the most critical part; treat AI as a helpful tool, like a fast calculator, but remember it lacks human common sense and understanding.

Before publishing or relying on any AI-generated content, whether it’s text, images, or reports, a person must review it carefully for accuracy, sensitivity, and whether it actually makes sense in the context – no more blindly copying and pasting!

Companies need to set clear rules and provide training. This means having simple policies outlining which AI tools are safe to use, what kind of company or customer information absolutely should not be entered into public AI, and the required steps for checking AI work. Staff need to be trained on these rules so everyone understands how to use AI safely and ethically.

We must all be careful with the data we input and the specific tools we choose. Always think twice before feeding sensitive information into an AI, prioritise using secure tools (perhaps even private, company-managed AI for highly confidential tasks), and ensure any AI service used respects data privacy and complies with Malaysian laws.

The recent mistakes involving our beloved Jalur Gemilang are a wake-up call. As we leverage AI’s powerful capabilities, remember that it is only as good as the way it is built and used. AI can never replace the human brain, and we should not outsource all our thinking to it.


Redzanur Rahman is a cloud engineer and an FMT reader.

The views expressed are those of the writer and do not necessarily reflect those of FMT.

