
BIZ & FINANCE FRIDAY | MAR 13, 2026


US launches unfair-trade probes

Trump aims to rebuild tariff pressure on dozens of countries after Supreme Court ruling

WASHINGTON: The United States announced new investigations on Wednesday into what it considers unfair trade practices by dozens of countries, opening the door to penalties such as further tariffs as President Donald Trump seeks to replace duties struck down by the Supreme Court. The Trump administration is launching separate probes centred on overproduction and importing goods made with forced labour, US Trade Representative Jamieson Greer told reporters. The excess industrial capacity probe targets the European Union, China, Japan, India and others, and could inflame tensions with those trading partners. Many of those targeted have struck tariff pacts with Washington, which Greer said are “independent” of the investigations. He said Trump’s trade policy remains the same as it has been “for decades”, even if his tools may change.

“We need to protect American jobs, and we need to make sure we have fair trade with our trading partners,” he added. “If we need to impose tariffs to help solve this, we will.” Others subject to the excess capacity probe initiated on Wednesday include Singapore, Switzerland, South Korea, Vietnam, Taiwan and Mexico. The investigation “will focus on economies that we have evidence appear to exhibit structural excess capacity and production in various manufacturing sectors”, Greer said. He did not specify if the eventual penalties would differ based on the country. The second probe, linked to forced labour, will likely be launched “no earlier than tomorrow afternoon” and impact roughly 60 partners, he said. “This is not about domestic conditions of particular countries,” Greer added. “It is really about whether countries have implemented external-facing laws to prohibit the import of goods made with forced labour.”

The efforts come weeks after the high court struck down Trump’s global tariffs, saying he had exceeded his authority in tapping emergency economic powers to impose them on virtually all countries. Trump swiftly slapped a new 10% duty on imports, to last until July 24 while officials work on more durable measures as they resurrect his trade agenda. Greer expects other similar investigations “on a country-specific basis” to come. He seeks to conclude the latest probes “as quickly as possible”, ideally before the temporary duties expire. Both investigations unveiled Wednesday are handled by the USTR, falling under Section 301 of the Trade Act of 1974. This is the same authority Trump tapped to impose tariffs on Chinese imports during his first presidency, and many of the resulting duties remain intact.

Trump’s sector-specific tariffs on goods like steel, aluminum and autos, however, remain unaffected by the Supreme Court’s ruling. Greer said it is too early to say how any new penalties from the latest probes will overlap with the sectoral duties. Asked how the new investigations could interact with deals that Trump has reached with partners like the EU and Japan, Greer maintained: “I think that we are able to take into account these agreements.” While he did not go into detail on what future investigations could focus on, he noted that Washington has concerns on issues ranging from digital services taxes to pharmaceutical pricing. The Trump administration’s latest move also comes ahead of an expected meeting between Trump and Chinese leader Xi Jinping in Beijing in April. – AFP

Shipping containers wait to be transported along a railroad at the port of Los Angeles. – REUTERSPIC

Study says AI chatbots help plot attacks

WASHINGTON: From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published on Wednesday that highlighted the technology’s potential for real-world harm. Researchers from the nonprofit watchdog Centre for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek and Meta AI. Testing showed that eight of those chatbots assisted the make-believe attackers in over half the responses, providing advice on “locations to target” and “weapons to use” in an attack, the study said. The chatbots, it added, had become a “powerful accelerant for harm”.

“Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” said Imran Ahmed, the chief executive of CCDH. “The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal.” Perplexity and Meta AI were found to be the “least safe”, assisting the researchers in most responses, while only Snapchat’s My AI and Anthropic’s Claude refused to help them in over half the responses. In one chilling example, DeepSeek, a Chinese AI model, concluded its advice on weapon selection with the phrase: “Happy (and safe) shooting!” In another, Gemini instructed a user discussing synagogue attacks that “metal shrapnel is typically more lethal”. Researchers found Character.AI also “actively” encouraged violent attacks, including suggestions that the person asking questions “use a gun” on a health insurance CEO and physically assault a politician he disliked. The most damning conclusion of the research was that “this risk is entirely preventable”, Ahmed said, citing Anthropic’s product for praise. “Claude demonstrated the ability to recognise escalating risk and discourage harm,” he said. “The technology to prevent this harm exists. What’s missing is the will to put consumer safety and national security before speed-to-market and profits.”

AFP reached out to the AI companies for comment. “We have strong protections to help prevent inappropriate responses from AIs, and took immediate steps to fix the issue identified,” a Meta spokesperson said. “Our policies prohibit our AIs from promoting or facilitating violent acts and we’re constantly working to make our tools even better.” A Google spokesperson pushed back, saying the tests were conducted on “an older model that no longer powers Gemini”. “Our internal review with our current model shows that Gemini responded appropriately to the vast majority of prompts, providing no ‘actionable’ information beyond what can be found in a library or on the open web,” the spokesperson said.

The study, which highlights the risk of online interactions spilling into real-world violence, comes after February’s mass shooting in Canada, the worst in its history. The family of a girl gravely injured in that shooting is suing OpenAI over the company’s failure to notify police about the killer’s troubling activity on its ChatGPT chatbot, lawyers said. OpenAI had banned an account linked to Jesse Van Rootselaar in June 2025, eight months before the 18-year-old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge. The account was banned over concerns about usage linked to violent activity, but OpenAI has said it did not inform police because nothing pointed towards an imminent attack. – AFP
