In the echo chamber

Well, that was surreal. I testified in a hearing about AI and the future of journalism held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Here is my written testimony and here’s the Reader’s Digest version in my opening remarks:

It was a privilege and honor to be invited to air my views on technology and the news. I went in knowing I had a role to play, as the odd man out. The other witnesses were lobbyists for the newspaper/magazine and broadcast industries and the CEO of a major magazine company. The staff knew I would present an alternative perspective. My fellow panelists noted before we sat down — nicely — that they disagreed with my written testimony. Job done. There was little opportunity to disagree in the hearing, for one speaks only when spoken to.

What struck me about the experience is not surprising: They call the internet an echo chamber. But, of course, there’s no greater echo chamber than Congress: lobbyists and legislators agreeing with each other about the laws they write and promote together. That’s what I witnessed in the hearing in a few key areas:

Licensing: The industry people and the politicians all took as gospel the idea that AI companies should have to license and pay for every bit of media content they use. 

I disagree. I draw the analogy to what happened when radio started. Newspapers tried everything to keep radio out of news. In the end, to this day, radio rips and reads newspapers, taking in and repurposing information. That’s to the benefit of an informed society.

Why shouldn’t AI have the same right? I ask. Some have objected to my metaphor: Yes, I know, AI is a program and the machine doesn’t read or learn or have rights any more than a broadcast tower can listen and speak and vote. I spoke metaphorically, for if I had instead argued that, say, Google or Meta has a right to read and learn, that would have opened up a whole can of PR worms. The point is obvious, though: If AI creators would be required by law to license *everything* they use, that grants them lesser rights than media — including journalists, who, let’s be clear, read, learn from, and repurpose information from each other and from sources every day. 

I think there’s a difference between using content to train a model and using it to produce output. It’s one matter for large language models to be taught the relationship between, say, the words “White” and “House.” I say that is fair and transformative use. But it’s a fair discussion to separate out questions of proper acquisition and terms of use when an application quotes copyrighted material from behind a paywall in its output. The magazine executive cleverly conflated training and output, saying *any* use required licensing and payment. I believe that sets a dangerous precedent for news media itself.
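
To make that distinction concrete, here is a minimal Python sketch, on a made-up toy corpus, of what “learning the relationship between words” amounts to at its simplest: the training step retains co-occurrence statistics, not copies of the articles.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for licensed news text.
corpus = "the White House said the White House budget passed".split()

# "Training," reduced to its simplest form: tally which word follows which.
# What is kept is a statistical relationship (that "House" tends to follow
# "White"), not a copy of any article.
bigram_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[prev_word][next_word] += 1

print(bigram_counts["White"])  # Counter({'House': 2})
```

Output is the separate question: an application that fetches and quotes a paywalled passage verbatim is doing something this tally never does.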

If licensing and payment are required for all use of all content, then I say the doctrine of fair use could be eviscerated. The senators argued just the opposite, saying that if fair use is expanded, copyright becomes meaningless. We disagree.

JCPA: The so-called Journalism Competition and Preservation Act is a darling of many members of the committee. Like Canada’s disastrous Bill C-18 and Australia’s corrupt News Media Bargaining Code — which the senators and the lobbyists think are wonderful — the JCPA would allow large news organizations (those that earn more than $100,000 a year, leaving out countless small, local enterprises) to sidestep antitrust and gang together and force platforms to “negotiate” for the right to link to their content. It’s legislated blackmail. I didn’t have the chance to say that. Instead, the lobbyists and legislators all agreed how much they love the bill and can’t wait to try again to pass it. 

Section 230: Members of the committee also want to pass legislation to exclude generative AI from the protections of Section 230, which enables public discourse online by protecting platforms from liability for what users say there while also allowing companies to moderate what is said. The chair said no witness in this series of hearings on AI has disagreed. I had the opportunity to say that he has found his first disagreement.

I always worry about attempts to slice away Section 230’s protections like a deli bologna. But more to the point, I tried to explain that there is nuance in deciding where liability should lie. In the beginning of print, printers were held liable — burned, beheaded, and behanded — for what came off their presses; then booksellers were responsible for what they sold; until ultimately authors were held responsible — which, some say, was the birth of the idea of authorship.

When I attended a World Economic Forum AI governance summit, there was much discussion about these questions in relation to AI. Holding the models liable for everything that could be done with them would, in my view, be like blaming the printing press for what is put on and what comes off it. At the event, some said responsibility should lie at the application level. That could be true if, for example, Michael Cohen was misled by Google when it placed Bard next to search, letting him believe it would act like search and giving him bogus case citations instead. I would say that responsibility generally lies with the user, the person who instructs the program to say something bad or who uses the program’s output without checking it, as Cohen did. There is nuance.

Deep fakery: There was also some discussion of the machine being used to fool people and whether, in the example used, Meta should be held responsible and expected to verify and take down a fake video of someone made with AI — or else be sued. As ever, I caution against legislating official truth.  

The most amusing moment in the hearing was when the senator from Tennessee complained that media are liberal and AI is liberal, and for proof she said that if one asks ChatGPT to write a poem praising Donald Trump, it will refuse. But it would write a poem praising Joe Biden, and she proceeded to read it to me. I said it was bad poetry. (BTW, she’s right: neither ChatGPT nor Bard will sing the praises of Trump, but both will say nice things about Biden. I’ll leave the discussion about so-called guardrails to another day.)

It was a fascinating experience. I was honored to be included. 

For the sake of contrast, in the morning before the hearing, I called Sven Størmer Thaulow, chief data and technology officer for Schibsted, the much-admired (and properly so) news and media company of Scandinavia. Last summer, Thaulow called for Norwegian media companies to contribute their content freely to make a Norwegian-language large language model. “The response,” the company said, “was overwhelmingly positive.” I wanted to hear more. 

Thaulow explained that they are examining the opportunities for a native-language LLM in two phases: first research, then commercialization. In the research phase now, working with universities, they want to see whether a native model beats an English-language adaptation, and in their benchmark tests, it does. As a media company, Schibsted has also experimented with using generative AI to allow readers to query its database of gadget reviews in conversation, rather than just searching — something I wish US news organizations would do: Instead of complaining about the technology, use it to explore new opportunities.
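
For the technically curious, that conversational-review feature reduces to a now-familiar pattern: retrieve the relevant passages, then hand them to a model as context. Below is a minimal sketch of that pattern, assuming hypothetical review snippets and naive keyword retrieval; it is not Schibsted’s actual system.

```python
# A minimal retrieve-then-ask sketch. The review snippets and prompt
# wiring here are hypothetical placeholders.
reviews = [
    {"product": "Acme X1 earbuds", "text": "great battery life but weak bass"},
    {"product": "Bolt 4K drone", "text": "stable video but short flight time"},
]

def retrieve(question, k=1):
    """Rank reviews by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        reviews,
        key=lambda r: len(words & set(r["text"].split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question):
    """Fold the retrieved reviews into a prompt for a language model."""
    context = "\n".join(f"{r['product']}: {r['text']}" for r in retrieve(question))
    return f"Answer using only these reviews:\n{context}\n\nQuestion: {question}"

# The prompt would then go to whatever model the publisher licenses or
# hosts; that call is outside this sketch.
print(build_prompt("which earbuds have good battery life"))
```

The real engineering lies in retrieval quality and the model behind it; the point here is only that the archive answers questions rather than returning a list of links.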

Media companies contributed their content to the research. A national organization made a blanket deal and individual companies were free to opt out. Norway being Norway — sane and smart — 90 percent of its books are already digitized and the project may test whether adding them will improve the model’s performance. If it does, they and government will deal with compensation then. 

All of this is before the commercial phase. When that comes, they will have to grapple with fair shares of value. 

How much more sensible this approach is than what we see in the US, where technology companies and media companies face off, with Capitol Hill as their field of play, each side trying to play the refs there. The AI companies, to my mind, rushed their services to market without sufficient research about impact and harm, misleading users (like hapless Michael Cohen) about their capabilities. Media companies rushed their lobbyists to Congress to cash in the political capital earned through journalism and to seek protectionism and favors from the politicians their journalists are supposed to cover independently. Politicians use legislation to curry favor in turn with powerful and rich industries.

Why can’t we be more like Norway?

