by Brian Shilhavy
Editor, Health Impact News

As someone who grew up with modern computer technology and at one time earned my living from it, who lived through the dot-com financial collapse, and who has owned an ecommerce business for over 21 years and survived multiple economic downturns, I find it plainly obvious that the current financial frenzy over chat AI hype is one of the largest developing financial bubbles, being blown up with no real model for generating revenue at this time.

And yet, hardly any financial analysts have come out to expose this very dangerous financial bubble, which could burst at any time and potentially sink the entire economy, until today.

But that financial analysis of the current spending frenzy over AI did not come from financial analysts in the U.S.; it came from the Chinese government.

China is the world’s second-largest venture capital market for technology start-ups, with only the U.S. investing more.

Sequoia and Other U.S.-Backed VCs Are Funding China’s Answer to OpenAI

A boom in artificial intelligence startup funding sparked by OpenAI has spilled over to China, the world’s second-biggest venture capital market. Now American institutional investors are indirectly financing a rash of Chinese AI startups aspiring to be China’s answer to OpenAI.

The American investors, including U.S. endowments, back key Chinese VC firms such as Sequoia Capital China, Matrix Partners China, Qiming Venture Partners and Hillhouse Capital Management that are striking local AI startup deals, which haven’t been previously reported. U.S. government officials have grown increasingly wary of such investments in Chinese AI as well as semiconductors because they could aid a geopolitical rival. (Source.)

Based on an opinion piece published earlier today in a Chinese financial publication, the Chinese government may move to regulate the AI industry to prevent a financial crash from the wild speculation in the tech sector over OpenAI.

Chinese shares related to artificial intelligence plunged after a state media outlet urged authorities to step up supervision of potential speculation.

The ChatGPT concept sector has “signs of a valuation bubble,” with many companies having made little progress in developing the technology, the Economic Daily wrote in a commentary Monday.

Regulators should strengthen monitoring and crack down on share-price manipulation and speculation to create “a well-disclosed and well-run market,” according to the newspaper, which runs a website officially recognized by Beijing. Companies, it said, should develop the capabilities they propose, while investors should refrain from speculating.

CloudWalk Technology Co. tumbled a record 20%, while 360 Security Technology Inc. dropped by 10%, the most in three years. Beijing Haitian Ruisheng Science Technology Ltd. sank 15%. Baidu Inc. dropped 3.5% in New York trading.

Since the release of ChatGPT, Chinese shares related to the technology have surged, with domestic big techs joining the race to develop generative AI. SenseTime Group Inc. last Tuesday rose the most in two months in Hong Kong amid speculation that the SoftBank Group Corp.-backed company was developing a product to challenge ChatGPT. Shares of Alibaba suppliers also jumped on reports that the tech giant will unveil its answer to ChatGPT.

“Generative AI is the hottest trend now and many tech companies will be launching their own versions in the coming months,” said Vey-Sern Ling, managing director at Union Bancaire Privee. “While valuations may rise to such news, the actual financial impact to these companies may be difficult to gauge at this juncture, and may lead to disappointment eventually.” (Source.)

Of course, the U.S. is also threatening to regulate the tech sector, including TikTok, which currently contributes billions of dollars to the U.S. economy.

The other huge concern regarding the feeding frenzy over new AI technology, as I reported in a recent article, is that legal issues over privacy and copyright could severely curtail use of the new OpenAI technology, if not outlaw it altogether.

WARNING: Faith in Artificial Intelligence is About to Destroy America – A Total System Collapse May be Imminent

Excerpt:

I am not familiar with the DAIR Institute and these “AI ethicists,” but their alleged concerns over AI and possible government regulation should strike fear into every venture capitalist now pouring billions of dollars into this technology, because the entire industry could go down in a crash by a simple act of Congress. Italy now appears to be doing just that, having banned OpenAI’s ChatGPT over privacy violations this past week.

Italy Bans OpenAI’s ChatGPT Over Privacy Concerns

Italy’s data protection authority has temporarily banned OpenAI’s ChatGPT over alleged privacy violations. The ban will remain in effect until OpenAI complies with the European Union’s privacy laws.

In a statement, the Italian National Authority for Personal Data Protection said ChatGPT violated the EU’s General Data Protection Regulation (GDPR) in multiple ways, including unlawfully processing people’s data and failing to prevent minors from accessing the AI chatbot.

The Italian privacy regulator warned OpenAI it has 20 days to respond to the order or face fines. (Full article.)

The other elephant in the room, given the rush to get these AI chatbots into the hands of the public before they even work correctly, is how the courts will handle the legal ramifications of using other entities’ copyrighted material if these Big Tech companies get sued for copyright violation over their AI bots scraping copyrighted information from the Internet.

I was in a technology discussion forum this past week (behind a paywall, so I cannot link to it) where this very issue was being discussed.

One person who posted a thread on this topic described their experience:

“I have spent much of the last 3 decades on issues on copyright in a digital age. I ended up chairing a study committee on copyright for the National Academy of Sciences.”

This person went on to write (in part):

An LLM is really a sophisticated index to a collection of knowledge. It is well established that an index can be copyrighted independent of the collection. So an LLM is clearly protected by copyright.

However, even more established law is that any derivative work of copyrighted content requires compensation to the owner of that content. To the extent that an LLM was trained on copyrighted content, compensation is owed to the owners of that content. How you can calculate that is unknown, but that is the law. There are precedents in music where even a few notes in one song contained in another are considered infringement.

Congress eventually created a statutory license to enable music streaming, where the Librarian of Congress sets the rate and the streamers are required to compute and pay that statutory rate. This is possible because each song is distinct. In an LLM, the copyrighted content is unlikely to be distinct. In any case, Congress would need to pass a law to enable this, and maybe it should.

There is only one clear conclusion here. An LLM cannot use copyrighted content without permission and will have to negotiate compensation. So far that seems largely absent from the conversation.

Read the full article here.
