Showing posts with label Google. Show all posts

May 7, 2010

Posting To Facebook Via Mobile? No Update Privacy For You!


It’s kind of crazy. I’ve been playing with Facebook’s “Posts By Everyone” search feature recently, and many people who hide their profile information have no problem sharing sometimes really personal updates with the world. Are there people on Facebook who don’t understand how to keep their updates out of the public eye? And why doesn’t Facebook allow for mobile updates to be private?

Searching Everyone’s Updates

You might have missed Facebook’s “Posts By Everyone” feature. It’s easy to overlook because search results aren’t shown by default. Consider this search for hungover:

Hungover Search On Facebook

When you start typing, Facebook suggests some options right within the search box. Pick any of those, and you go directly to a person, page or application, rather than to overall search results. It’s easy to do this just by hitting Enter, so you never see the search results page at all.

If you go to the very bottom, there’s a “More Results” option, as highlighted above. Click that, and a broader set of results appears:

Hungover Search On Facebook

Notice that on the left-hand side of the results, there are options to get results from these categories:

  • All Results
  • People
  • Pages
  • Groups
  • Applications
  • Events
  • Web Results
  • Posts By Friends
  • Posts By Everyone

In the search results above, you can see that “All Results” is highlighted, so I should be getting back results from all these categories. However, that’s not what happens. Instead, Facebook only brings back results from matching Pages, Posts By Friends and Web Results. That’s it.

(This, by the way, is just one example of why I often joke to people who warn that Facebook will beat Google in search that Facebook has enough problems searching Facebook itself, much less the entire web.)

Now look what happens if I drill in to the “Posts By Everyone” category:

Hungover Search On Facebook

Suddenly I see what Facebook failed to show me before: all the people on Facebook telling the world about their hangovers.

Sharing Hangovers On Facebook & Twitter

Do these people all mean to share this way? Well, it’s not like people on Twitter don’t share about having hangovers:

Hungover Search On Twitter

The key difference between Facebook and Twitter is that on Twitter, you’re sharing with the world by default. On Facebook, the default for updates is to share only with your friends.

In other words, post to Twitter, and most people probably realize they’re telling something to the world. Post on Facebook, and many people might think they’re only sharing with their friends.

Facebook’s Warnings About Sharing To The World

Indeed, Facebook deserves credit for really making you jump through hoops before you can share an update with the world. For example, here’s what you get in a brand new account, before you’ve even posted anything:

Facebook Update Privacy Warning

That links over to a privacy FAQ page, and the only way the message disappears is if you manually click to close it. If you don’t close it, the message reappears each time you come back to the status area.

Beyond that, if you make an update and change from the default “Only Friends” option:

Sharing With Friends Facebook Update Setting

To the “Everyone” option, you get another warning:

Facebook Update Privacy Warning

After you post to everyone, your default remains stuck on “Only Friends.” Facebook doesn’t shift it to “Everyone,” something it could do if it wanted to nudge people toward sharing more publicly, as many people — including myself — suspect it wants to do.

You Hide Your Profile, But Not Your Updates?

So why would I think some people don’t understand Facebook’s privacy settings for updates, especially when there are so many hoops to jump through?

Consider a search for hate my boss. I’m not going to put up a screenshot, because I don’t want to immortalize anyone and get them in trouble. But do that search, and you get posts like:

hate my job. hate my boss.

i hate my job. talked to my managers and boss didn’t help and made it worse

Do people saying these things realize that their bosses might also see the updates? To test, I went to the profiles of 10 people who each appeared in that “hate my boss” search. Here’s what I saw for all 10 of them (I’ve blanked out the name for the example shown):

Profile Sharing Message

The message tells me that this person is sharing only some of their info with everyone, right? And yet, I can see their updates. In fact, if I select the “Wall” tab, I see all their updates nicely displayed. If someone’s boss found them by name on Facebook — which isn’t hard to do — the boss could do the same.

Why would all these people who keep their profiles locked down still share updates publicly? One issue might be that Facebook displays the “this person shares only some things” message to anyone who isn’t that person’s friend, because chances are everyone has at least some tiny bit of information that isn’t shared by default on Facebook.

Facebook’s Mobile Free-For-All

Another reason is mobile. I fired up the Facebook application for the iPhone. There’s a big “What’s on your mind” box that appears at the top. Enter something, like “I hate my boss,” and that message goes to your Wall — and to the world.

Unlike the full Facebook site, the application has no privacy settings that I can find, no “Only Friends” sharing choice. If you share via the iPhone — and perhaps other mobile devices — you share with the world. That’s also true if you use Facebook’s mobile site on the web: there’s no option there other than to share with the world.

Going back to those 10 people I reviewed: 6 of them are tagged in the search results as sharing “via the Mobile Web.” In contrast, of the 10 people I looked at who said hate my boss on Twitter, only one seemed to post via mobile.

Maybe some of those people on Facebook didn’t mean for their updates to go public. Or, maybe they’re just stupid or don’t care. I can’t fault Facebook for how it handles things on its full web site, in terms of highlighting privacy issues with updates. On the mobile front, they look to be screwing up big time.

Advice For The Concerned

By the way, as Facebook’s privacy issues ramp up, I read about more and more people wondering if they should cancel their Facebook accounts. I went through a similar struggle last December (see Now Is It Facebook’s Microsoft Moment?). As a marketer, I ultimately decided I still needed to be on the Facebook platform. But I also shifted to primarily sharing information through my fan page, where everything is public, by default.

I highly recommend fan pages to anyone. It may be a way for you to feel you have more control on Facebook at a time when it’s difficult to understand what Facebook is likely to change next. Don’t be put off by the weirdness of having a “fan” page. Just think of it as a way to have a place on Facebook where you know everything is public: a constant reminder that what you say is being said to the world overtly, rather than a constant fear that what you say or do might get shared with the world without you realizing it.

Alternatively, just assume that all you do on Facebook is public, that there is no privacy. Make that assumption, and you’ll be relatively safe — assuming that apps don’t start tracking all your web surfing habits and reporting back to the Facebook mothership or the world. To be really safe, always log out of Facebook.

Advice For Facebook

To Facebook, my advice is more blunt. Get your shit together. Enough of the explanations that the web is more comfortable being public, that everyone has “granular” privacy controls, and other platitudes. Each day, there seems to be some new worry — just do a search for Facebook on Techmeme for a summary.

This week, we’ve had everything from private chats being exposed to applications that add themselves to your profile. Today, it’s how people might be sharing with the world, through your mobile applications, stuff they believe is private.

Someone over there, anyone — stand up and scream that your company is screwing up big time on the privacy front. You keep getting away with it so far, but that might not continue.



Facebook Adding Location Features This Month

Image: Check.in (by habi, via Flickr)

Information has leaked that Facebook is set to roll out location-based features for users and brands as soon as this month. According to Advertising Age, users could see location options any day now.

These features include the ability to check in at various locations, including retail spots and restaurants. We’re unclear as to whether users will be able to add or customize their own locations, but we are fairly positive that this move will put Foursquare, Brightkite, Gowalla and other location-based services in an uncomfortable position.


Meaning for Users


The ability to check in to different locations is, as we’ve reported previously, a game-changing feature for Facebook. Foursquare, Gowalla, Brightkite and other startups that specialize in location-based features and services — and that often take checks from corporations for branded integrations — might have trouble competing with a Goliath like Facebook if the push toward checkins continues. Facebook has the user base and mainstream adoption to bring location-sharing tools to a huge audience, excluding these newer competitors from the market. And if the company is rolling out its own features now, an acquisition seems unlikely, too.

If this feature does indeed roll out soon to end users, it also brings with it another round of privacy concerns. It’s clear that not all users understand the risks of public sharing or how to protect their likes, groups and updates. When they risk exposing their locations to the general populace, another layer of security precautions (along with the usual media FUD) is sure to follow.


Meaning for Brands


Image: Facebook, Inc. (via Wikipedia)

McDonald’s will be the first brand to test the new features. The McDonald’s integration will involve users checking in at McDonald’s restaurants and showing featured food items in their posts. Digital advertising and marketing shops around the country are preparing to construct campaigns around this new functionality.

It’s interesting to note that this move further puts Facebook into competition with Google for local advertising dollars. Being able to target users geographically as well as demographically gives hyperlocal advertisers an edge and might cut into Google’s most profitable revenue stream.


Blogger Buster - eBooks, Updated Links

As many Blogger Buster readers have reported, the links to download my eBooks had become corrupted during my extended hiatus.

Today I have uploaded both The Blogger Template Book and The Cheats' Guide to Customizing Blogger templates to Google Documents, a relocation I hope will prove far more reliable than my previous hosting solution!

I've updated the download links for these eBooks and hope everyone can now read and download these resources without any problems.

It's been a long time since these eBooks were released, so here's a quick overview of both offerings for new readers and those who may have missed the documentation upon their initial releases:



The Blogger Template Book

Choosing and using a new template for your Blogger-powered blog can be quite a daunting task. What is the best method for installation? How can you choose the design and layout most suitable for your blog?

The Blogger Template Book is a free complete guide to Blogger templates, from choosing the best layout and design to suit your content, to foolproof installation methods and optimization.

Learn more »

Download The Blogger Template Book (PDF, 114 pages)


The Cheats' Guide to Customizing Blogger Templates

Originally published back in January 2008, this eBook will offer you many options and examples to customize your existing Blogger template quickly and easily, whichever style of template you are using!

Admittedly, some of the techniques described in this eBook have been superseded by Blogger's new Template Designer, though I hope many will still find it useful for learning the basics of Blogger template customization.

Learn more »

Download The Cheats' Guide to Customizing Blogger Templates (PDF, 53 pages)


A new eBook in progress...

I'm currently working on a new premium eBook for Blogger users which I hope to release in the summer.

The Blogger Book will provide a comprehensive guide to using Blogger's publishing service: from initial creation through to designing custom-made templates and hosting an entire website on Blogspot, this guide will help you learn tips and techniques for developing a truly amazing Blogger-based site.

Stay tuned by subscribing to the RSS feed or the new email newsletter and you'll receive updates (plus sneak previews) of the forthcoming book in addition to our regular content.




May 2, 2010

A Sea of History - Twitter at the Library of Congress - NYTimes.com

Image: Seal of the United States Library of Congress (via Wikipedia)

Twitter users now broadcast about 55 million Tweets a day. In just four years, about 10 billion of these brief messages have accumulated.

Not a few are pure drivel. But, taken together, they are likely to be of considerable value to future historians. They contain more observations, recorded at the same times by more people, than ever preserved in any medium before.

“Twitter is tens of millions of active users. There is no archive with tens of millions of diaries,” said Daniel J. Cohen, an associate professor of history at George Mason University and co-author of a 2006 book, “Digital History.” What’s more, he said, “Twitter is of the moment; it’s where people are the most honest.”

Last month, Twitter announced that it would donate its archive of public messages to the Library of Congress, and supply it with continuous updates.

Several historians said the bequest had tremendous potential. “My initial reaction was, ‘When you look at it Tweet by Tweet, it looks like junk,’” said Amy Murrell Taylor, an associate professor of history at the State University of New York, Albany. “But it could be really valuable if looked through collectively.”

Ms. Taylor is working on a book about slave runaways during the Civil War; the project involves mountains of paper documents. “I don’t have a search engine to sift through it,” she said.

The Twitter archive, which was “born digital,” as archivists say, will be easily searchable by machine — unlike family letters and diaries gathering dust in attics.

Image: Twitter, as depicted in CrunchBase (via CrunchBase)

As a written record, Tweets are very close to the originating thoughts. “Most of our sources are written after the fact, mediated by memory — sometimes false memory,” Ms. Taylor said. “And newspapers are mediated by editors. Tweets take you right into the moment in a way that no other sources do. That’s what is so exciting.”

Twitter messages preserve witness accounts of an extraordinary variety of events all over the planet. “In the past, some people were able on site to write about, or sketch, as a witness to an event like the hanging of John Brown,” said William G. Thomas III, a professor of history at the University of Nebraska-Lincoln. “But that’s a very rare, exceptional historical record.”

Ten billion Twitter messages take up little storage space: about five terabytes of data. (A two-terabyte hard drive can be found for less than $150.) And Twitter says the archive will be a bit smaller when it is sent to the library. Before transferring it, the company will remove the messages of users who opted to designate their account “protected,” so that only people who obtain their explicit permission can follow them.
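Those two figures imply an average of only about 500 bytes per archived message, metadata included. A quick back-of-the-envelope check (the per-message average is my own inference from the article's round numbers, not a figure from Twitter or the library):

```python
# Back-of-the-envelope check: average size of an archived Tweet.
# Assumed round numbers taken from the article, not official figures.
messages = 10_000_000_000      # "about 10 billion" messages in four years
archive_bytes = 5 * 10**12     # "about five terabytes" of data

print(f"~{archive_bytes / messages:.0f} bytes per message, metadata included")
# prints: ~500 bytes per message, metadata included
```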

A Twitter user can also elect to use a pseudonym and not share any personally identifying information. Twitter does not add identity tags that match its users to real people.

Each message is accompanied by some tidbits of supplemental information, like the number of followers that the author had at the time and how many users the author was following. While Mr. Cohen said it would be useful for a historian to know who the followers and the followed are, this information is not included in the Tweet itself.

But there’s nothing private about who follows whom among users of Twitter’s unprotected, public accounts. This information is displayed both at Twitter’s own site and in applications developed by third parties whom Twitter welcomes to tap its database.

Alexander Macgillivray, Twitter’s general counsel, said, “From the beginning, Twitter has been a public and open service.” Twitter’s privacy policy states: “Our services are primarily designed to help you share information with the world. Most of the information you provide to us is information you are asking us to make public.”

Mr. Macgillivray added, “That’s why, when we were revising our privacy policy, we toyed with the idea of calling it our ‘public policy.’ ” He said the company would have done so but California law required that it have a “privacy policy” labeled as such.

Even though public Tweets were always intended for everyone’s eyes, the Library of Congress is skittish about stepping anywhere in the vicinity of a controversy. Martha Anderson, director of the National Digital Information Infrastructure and Preservation Program at the library, said, “There’s concern about privacy issues in the near term and we’re sensitive to these concerns.”

The library will embargo messages for six months after their original transmission. If that is not enough to put privacy issues to rest, she said, “We may have to filter certain things or wait longer to make them available.” The library plans to dole out its access to its Twitter archive only to those whom Ms. Anderson called “qualified researchers.”

But the library’s restrictions on access will not matter. Mr. Macgillivray at Twitter said his company would be turning over copies of its public archive to Google, Yahoo and Microsoft, too. These companies already receive the stream of current Twitter messages instantaneously. When the archive of older Tweets is added to their data storehouses, they will have a complete, constantly updated set, and users won’t encounter a six-month embargo.

Google already offers its users Replay, the option of restricting a keyword search only to Tweets and to particular periods. It’s quickly reached from a search results page. (Click on “Show options,” then “Updates,” then a particular place on the timeline.)

A tool like Google Replay is helpful in focusing on one topic. But it displays only 10 Tweets at a time. To browse 10 billion — let’s see, figuring six seconds for a quick scan of each screen — would require about 190 sleepless years.
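The 190-year estimate follows directly from the numbers in that paragraph. Here is the arithmetic spelled out, using the article's own assumptions of ten Tweets per screen and six seconds per screen:

```python
# Reproduce the "about 190 sleepless years" estimate from the figures above.
tweets = 10_000_000_000       # 10 billion archived messages
per_screen = 10               # Google Replay shows 10 Tweets at a time
seconds_per_screen = 6        # a quick scan of each screen

total_seconds = (tweets / per_screen) * seconds_per_screen
years = total_seconds / (365.25 * 24 * 3600)
print(f"about {years:.0f} years of nonstop browsing")   # about 190 years
```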

Mr. Cohen encourages historians to find new tools and methods for mining the “staggeringly large historical record” of Tweets. This will require a different approach, he said, one that lets go of straightforward “anecdotal history.”

In the end, perhaps quality will emerge from sheer quantity.

Randall Stross is an author based in Silicon Valley and a professor of business at San Jose State University. E-mail: stross@nytimes.com.



Gutenberg 2.0 | Harvard Magazine May-Jun 2010

Photograph by Jim Harrison

Nearly half of Harvard’s collection is housed at the Harvard Depository, a marvel of efficient off-campus storage. Library assistant Carl Wood reshelves books in the 30-foot-high, 200-foot-long stacks.



“Throw it in the Charles,” one scientist recently suggested as a fitting end for Widener Library’s collection. The remark was outrageous—especially at an institution whose very name honors a gift of books—but it was pointed. Increasingly, in the scientific disciplines, information ranging from online journals to databases must be recent to be relevant, so Widener’s collection of books, its miles of stacks, can appear museum-like. Likewise, Google’s massive project to digitize all the books in the world will, by some accounts, cause research libraries to fade to irrelevance as mere warehouses for printed material. The skills that librarians have traditionally possessed seem devalued by the power of online search, and less sexy than a Google query launched from a mobile platform. “People want information ‘anytime, anyplace, anywhere,’” says Helen Shenton, the former head of collection care for the British Library who is now deputy director of the Harvard University Library. Users are changing—but so, too, are libraries. The future is clearly digital.

Photograph by Jim Harrison

Isaac Kohane, director of the Countway Medical Library, sees librarians returning to a central role in medicine as curators of databases and as teachers of complex bioinformatics search techniques.

Yet if the format of the future is digital, the content remains data. And at its simplest, scholarship in any discipline is about gaining access to information and knowledge, says Peter Bol, Carswell professor of East Asian languages and civilizations. In fields such as botany or comparative zoology, researchers need historical examples of plant and animal life, so they build collections and cooperate with others who also have collections. “We can call that a museum of comparative zoology,” he says, “but it is a form of data collection.” If you study Chinese history, as Bol does, you need access to primary sources and to the record of scholarship on human history over time. You need books. But in physics or chemistry, where the research horizon is constantly advancing, much of the knowledge created in the past has very little relevance to current understanding. In that case, he says, “you want to be riding the crest of the tidal wave of information that is coming in right now. We all want access to information, and in some cases that will involve building collections; in others, it will mean renting access to information resources that will keep us current. In some cases, these services may be provided by a library, in others by a museum or even a website.”

Meanwhile, “Who has the most scientific knowledge of large-scale organization, collection, and access to information? Librarians,” says Bol. A librarian can take a book, put it somewhere, and then guarantee to find it again. “If you’ve got 16 million items,” he points out, “that’s a very big guarantee. We ought to be leveraging that expertise to deal with this new digital environment. That’s a vision of librarians as specialists in organizing and accessing and preserving information in multiple media forms, rather than as curators of collections of books, maps, or posters.”

Librarians as Information Brokers

Bol is particularly interested in the media form known as Google Book Search (GBS). The search-engine giant is systematically scanning books from libraries throughout the world in order to assemble an enormous, Internet-accessible digital library: at 12 million books, its collection is already three-quarters the size of Harvard’s. Soon it will be the largest library the world has ever known. Harvard has provided nearly a million public domain (pre-1923) books for the project; by participating, the University helped with the creation of a new tool (GBS) for locating books that is useful to people both at Harvard and around the world. And participation made the full text content of these books searchable and available to everyone in the United States for free.

GBS appeals to Bol and other scholars because it gives them quick and easy access to books that Harvard does not own (litigation over the non-public-domain works in GBS notwithstanding). For Bol, such a tool might be especially useful: Harvard acquires only 15,000 books from China each year, but he estimates that it ought to be collecting closer to 50,000. So GBS could be a boon to scholarship.

But GBS also raises all kinds of questions. If everything eventually is available at your fingertips, what will be the role of libraries and librarians?

“Internet search engines like Google Books fundamentally challenge our understanding of where we add value to this process,” says Dan Hazen, associate librarian of collection development for Harvard College. Librarians have worked hard to assemble materials of all kinds so that it is “not a random bunch of stuff, but can actually support and sustain some kind of meaningful inquiry,” he explains. “The result was a collection that was a consciously created, carefully crafted, deliberately maintained, constrained body of material.”

Internet search explodes the notion of a curated collection in which the quality of the sources has been assured. “What we’re seeing now with Google Scholar and these mass digitization projects, and the Internet generally,” says Hazen, “is, ‘Everything’s out there.’ And everything has equal weight. If I do a search on Google, I can get a scholarly journal. I can get somebody’s blog posting….The notion of collection that’s implicit in ‘the universe is at my fingertips’ is diametrically opposed, really, to the notion of collection as ‘consciously curated and controlled artifact.’” Even the act of reading for research is changed, he points out. Scholars poring through actual newspapers “could see how [an item] was presented on the page, and the prominence it had, and the flow of content throughout a series of articles that might have to do with the same thing—and then differentiate those from the books or other kinds of materials that talked about the same phenomenon. When you get into the Internet world, you tend to get a gazillion facts, mentions, snippets, and references that don’t organize themselves in that same framework of prominence, and typology, and how stuff came to be, and why it was created, and what the intrinsic logic of that category of materials is. How and whether that kind of structuring logic can apply to this wonderful chaos of information is something that we’re all trying to grapple with.”

How does searching digitally in a book relate to the act of reading? “There may be a single fact that’s important,” Hazen explains. “Is the book’s overall argument something that’s equally important as the single fact or is it just irrelevant? When people worry about reading books online, part of the worry is that the nuances of a well-developed argument that goes on linearly for 300 pages [are missing]. That’s not the way you interact with a text online.” How the flood of information from digitized books will be integrated into libraries, which have a separate and different, though not necessarily contradictory, logic remains to be seen. “For librarians, and the library, trying to straddle these two visions of what we’re about is something that we’re still trying to figure out.”

Photograph by Jim Harrison

The printed book took hundreds of years to replace handwritten manuscripts, which persisted as an economical way to produce small numbers of copies into the nineteenth century, nearly 400 years after Gutenberg invented movable type. Robert Darnton, director of the University Library, shown with Diderot’s Encyclopédie, predicts great longevity for the book.

Moreover, the prospect that, increasingly, libraries will be stewards of vast quantities of data, a great deal from books, and some unique, raises very serious concerns about the long-term preservation of digital materials. “What worries us all,” says Nancy M. Cline, Larsen librarian of Harvard College, “is that we really haven’t tested the longevity for a lot of these digital resources.” This is a universal problem and the subject of much international attention and research. “If you walk into the book stacks,” she points out, “you can simply smell in some areas the deterioration of the paper and leather. But with something that hums away on a server, we don’t have the same potential to observe” (see “Digital Preservation: An Unsolved Problem,” page 82).

Despite these caveats, Bol’s vision of future librarians as digital-information brokers rather than stewards of physical collections is already taking shape in the scientific disciplines, where the concerns raised by Hazen are less important. In fields faced with information overload—such as biology, coping with a barrage of genomic data, and astronomy, in which an all-sky survey telescope can generate a terabyte of data in a single night—the torrents of raw information are impossible to absorb and understand without computational aids.

Medicine has had to cope with this problem ever since nineteenth-century general practitioners found they could no longer keep up with the sheer quantity of published medical literature. Specialization eventually allowed doctors to focus only on the journals in their particular area of expertise. Throughout such transitions, libraries played an important role. Doctors, upon completing their rounds, would comb the stacks for records of similar cases that might help with diagnosis and treatment. Today, the amount of new information being generated in the biological sciences is prescribing another momentous shift that may provide a glimpse of the future in other disciplines. For a doctor, learning about a genetic test and then interrogating a database to understand the results could save a life. For libraries and librarians, the new premium on skills they have long cultivated as curators, preservers, and retrievers of collective knowledge puts them squarely on top of an information geyser in the sciences that could reshape medicine.

Mining the Bibliome

Isaac Kohane, director of the Countway Library at Harvard Medical School (HMS), recently asked a pointed question on his blog: Who is the better doctor—the one who can remember more diagnostic tests or the one “who is the quickest and most savvy at online searching for the relevant tests?” He predicts that “we are going to be uncomfortable with some of the answers to these questions for many years to come” because success based not on bedside manner, but on competence interacting with a database, implies a potential devaluing of skills that society has honored. And who is pondering these issues most acutely? A blogging librarian and pediatric endocrinologist with a Ph.D. in computer science.

One hundred years ago, says Kohane, a report on medical education in the United States concluded that doctors were inadequately prepared to care for patients. Half the medical schools in the country closed. “I think we are at a similar inflection point,” he says. “If you look at bioinformatics and genetics, you see vivid examples—which can be generalized to other parts of medicine—where the system has inadequately educated and empowered its workers in the use of search, electronic resources, and automated knowledge management.” Genetic testing, he adds, offers a “prismatic example”: studies in the Netherlands and the United States have shown that “physicians are ordering genetic tests because patients are asking them to, [even though] they don’t know how to interpret the tests and are uncomfortable doing so.”

Kohane sees similar problems when making the rounds with medical students, fellows, and residents: “When we run into a problematic complex patient with a clearly genetic problem from birth, and I ask what the problem might be and what tests are to be ordered, their reflex is either to search their memories for what they learned in medical school or to look at a textbook that might be relevant. They don’t have what I would characterize as the ‘Google reflex,’ which is to go to the right databases to look things up.” The students doubtless use Google elsewhere in their lives, but in medicine, he explains, “the whole idea of just-in-time learning and using these websites is not reflexive. That is highly troublesome because the time when you could keep up even with a subspecialty like pediatric neurosurgery by reading a couple of journals is long, long gone.”

The journals themselves have grown in number and quantity of articles, but “the amount of data being produced and analyzed in large, curated databases,” Kohane says, “exceeds by several orders of magnitude what appears in printed publications.” The fact that students and doctors don’t think to use this digital material is an international problem. “Even at Harvard,” where “we spend millions of dollars” annually for access to the databases, “many of the medical staff, graduate students, and residents don’t know how to use…,” he pauses. “Well, it’s worse than that. They don’t know that they exist.”

But in this lamentable situation Kohane sees an opportunity for medical libraries, whose role, he believes, had faded for a while. “It is becoming so clear that medicine and medical research are an information-processing enterprise, that there’s an opportunity for a library that would embrace that as a mission…to be again a center of the medical enterprise.”

Kohane has sought to do just that by creating an information institute—an HMS-wide center for biomedical informatics—embedded within Countway Library. The institute offers voluntary mini-courses, invariably oversubscribed, explaining what the relevant databases are, how to plumb them, and how to analyze the data they produce. A parallel effort under his supervision seeks to “mine the bibliome”—the totality of the electronically published medical literature—by allowing researchers to track down relationships between genes and diseases in the published literature that would not be apparent when searching one reference at a time. Librarians in the institute also comb databases for contradictions, and find references to sites in the genome that can’t possibly exist because the coordinates are wrong. In making sure that information is good, the library is “returning to its original mission of curation,” says Kohane, “but in a genomic era and around bioinformatics.” This defines a new role for librarians as database experts and teachers, while the library becomes a place for learning about sophisticated search for specialized information.

Such skills-based teaching, learning, and data curation depend on finding individuals who are trained in medicine and also have the public-minded qualities of a librarian—rare indeed, as Kohane readily acknowledges. And even though the cost of such bioinformatics education is small relative to the millions of dollars spent on subscription fees for electronic periodicals (the price of which doubled between 2000 and 2010, says Kohane; see “Open Access,” May-June 2008, page 61 for more on the crisis in scholarly communication), the resources to provide more educational support for complex types of database search training are insufficient across the University. “That’s because we are trying to bolt on a solution to a problem that probably should be addressed foursquare within the core educational process,” he says.

There is growing awareness of the need to have an “information-processing approach to medicine baked into the core education of doctoral and medical students.” Otherwise, Kohane says, “we’re condemning them to perpetual partial ignorance.” Already, a few lectures on the topic are being introduced into the medical-school curriculum, making HMS a pioneer in this area. Discussions about bringing more of the biological/biomedical informatics agenda to the undergraduate campus are also under way.

Even in the relatively tradition-bound profession of law, digitization cuts so deeply that when Ess librarian and professor of law John G. Palfrey VII restructured the Law School library last year, he says he thought about the mission less as “How do we build the greatest collection of books in law?” and more as “How do we make information as useful as possible to our community now and over a long period of time?”

This focus on information services within a community guided both personnel decisions and collections strategies. “We scrapped the entire organizational structure,” reports Palfrey (whose digital genes can be traced back to his former position as executive director of the law school’s Berkman Center for Internet and Society). Last June 30, all the librarians handed in resignations for the jobs they had held and received new assignments. There is now a librarian who works with faculty members, teaching empirical research methods, and another who helps students and faculty conduct empirical research. The collection development group includes “a lab for hacking a library”: a member of that team is working on an idea called “Stack View” that would allow the re-creation of serendipitous browsing in a digital format. Technology “allows you to reorganize information and present it in a totally different way,” Palfrey points out.

The law library’s new collection-development policy is organized along a continuum of materials for which the library takes increasing responsibility. These range from resources in the public domain that aren’t collected, but to which the library provides access; to materials accessed under license; and all the way up to unique holdings of an historic or special nature that the library archives, preserves, and may one day digitize in order to provide online access. The fact that the library no longer buys everything published in the law has been made explicit. “It is no longer possible financially, nor is it desirable—not all of it is useful,” Palfrey says bluntly. Only a third of newly purchased books are initially bound. “We’ll put a barcode on it, put it on the shelf, and see if people use it,” he explains. “If they do, and the book starts to wear, then we’ll send it to the bindery.”

Even though these changes may seem like cutbacks (they were in fact planned and in process before the University’s financial crisis became apparent), he believes skilled librarians are in no danger of becoming obsolete: “The role of the librarian is much greater in this digital era than it has ever been before.” Good lawyers need to be good at information processing, and Palfrey found in research for his book Born Digital that students today are not very good at using complex legal databases. “They try to use the same natural-language search techniques” they learned from using Google, he says, rather than thinking about research as “a series of structured queries. It’s not that we don’t need libraries or librarians,” he continues, “it’s that what we need them for is slightly different. We need them to be guides in this increasingly complex world of information and we need them to convey skills that most kids actually aren’t getting at early ages in their education. I think librarians need to get in front of this mob and call it a parade, to actually help shape it.”

Mary Lee Kennedy, executive director of knowledge and library services at Harvard Business School, whose very title suggests a new kind of approach, agrees with Palfrey. “The digital world of content is going to be overwhelming for librarians for a long time, just because there is so much,” she acknowledges. Therefore, librarians need to teach students not only how to search, but “how to think critically about what they have found…what they are missing… and how to judge their sources.”

Her staff offers a complete suite of information services to students and faculty members, spread across four teams. One provides content or access to it in all its manifestations; another manages and curates information relevant to the school’s activities; the third creates Web products that support teaching, research, and publication; and the fourth group is dedicated to student and faculty research and course support. Kennedy sees libraries as belonging to a partnership of shared services that support professors and students. “Faculty don’t come just to libraries [for knowledge services],” she points out. “They consult with experts in academic computing, and they participate in teaching teams to improve pedagogy. We’re all part of the same partnership and we have to figure out how to work better together.”

Photograph by Jim Harrison

“A man will turn over half a library to make one book,” said Samuel Johnson. Nancy Cline, Larsen librarian of Harvard College, displays a manuscript letter from the Hyde Collection of Dr. Samuel Johnson; all its Johnson letters are available online as part of the University Library’s open-collections program.

“Just in Time” Libraries

All this is not to suggest that the traditional role of libraries as collections where objects are stored, preserved, and retrieved on request is going away. But it is certainly changing. Two facilities—one digital, the other analog—suggest a bifurcated future. The two could not be more different, though their mandates are identical.

In Cambridge, the Digital Repository Service (DRS) is a rapidly growing, 109-terabyte online library of 14 million files representing books, daguerreotypes, maps, music, images, and manuscripts, among other things, all owned by Harvard. In a facility that also serves other parts of the University, a two-person command center monitors more than a hundred servers. Green lights indicate all is well; red flashes when environmental conditions such as temperature or humidity exceed designated parameters. In a nearby room, warm and alive with the whirr of hundreds of cooling fans, their cumulative sound resembling the roar of a giant waterfall, a handful of servers hold the library’s entire digital collection. Other servers are dedicated to “discovery,” the technical term for the searchable online catalog, or “delivery,” the act of providing a file to an end user.

There are at least three copies of the entire repository—one in, and two outside of, Cambridge. One of them, secured by thumbprint access, is constantly being read by machines at the disk level to ensure the integrity of the data, a process that takes a full month to complete. “Several times a year,” says Tracey Robinson, who heads the library’s office for information systems, “we detect data that have become corrupted. We engage in a constant process of refreshing and making sure that everything is readable.” Any damaged material is quickly replaced with another copy from the backup.
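What Robinson describes is essentially what digital archivists call fixity checking: periodically rehashing every stored file, comparing the result against a checksum recorded at ingest, and restoring any mismatch from a replica. The sketch below is a minimal illustration of that general idea, not Harvard's actual DRS tooling; the manifest format, paths, and choice of SHA-256 are assumptions for the example.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so very large objects don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def audit(repository: Path, manifest: dict[str, str]) -> list[str]:
    """Return the relative paths whose files are missing or no longer match
    the checksum recorded at ingest; these are the candidates for restoration
    from one of the replica copies."""
    damaged = []
    for rel_path, expected in manifest.items():
        target = repository / rel_path
        if not target.is_file() or sha256_of(target) != expected:
            damaged.append(rel_path)
    return damaged

# Hypothetical usage: the manifest would be written when files are ingested,
# then the audit run on a rolling schedule (Harvard's full pass takes a month).
if __name__ == "__main__":
    repo = Path("/data/repository")             # assumed location
    manifest = {"books/0001.tif": "ab12cd..."}  # assumed stored checksums
    for rel_path in audit(repo, manifest):
        print(f"restore {rel_path} from a backup replica")
```

In a production archive the manifest itself would be replicated and the audit spread across machines, but the principle is the same: detect silent corruption early, while at least one clean copy still exists.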

The analog counterpart to the DRS is the Harvard Depository (HD), located in the countryside about 45 minutes from Boston. A low, modular building with loading dock bays, it resembles a warehouse more than anything else. In many ways, that is precisely what it is. Just two librarians oversee 7.5 million books held in an energy-efficient, climate-controlled environment—more than twice as many as are held at Widener, which is three times as large. “The libraries based in the city are among the most expensive in terms of linear capacity,” says Nancy Cline. “The Depository as a concept is absolutely essential for us.” A number of other libraries in the Boston area, including MIT, use the HD. The facility absorbs half a million new books each year, circulates 220,000, and boasts a 100 percent retrieval rate. (In 24 years, just two books could not be found for delivery; in a typical library, one study showed, patrons find what they are looking for only 50 percent of the time.)

The secret to the HD’s extraordinary density and retrieval rate is simple: here, a book is not a book. Titles, subjects, authors—none of this so-called “metadata,” the information people typically use to find things, matters. “We know how many books we get in,” says assistant director of the University Library for the Harvard Depository Tom Schneiter, who directs the facility, “but we don’t know what they are. To us, they are just barcodes. It makes our work much more efficient.” A staff of dedicated workers, who rotate through different tasks in order to break up the routine, can check in as many as 800 barcodes an hour. All the items are sorted and shelved according to size in bins that are themselves barcoded. This allows the height of the shelves to be perfectly calibrated to the height of the books; no wasted airspace. Place a request for one of the books in the HD and it will be delivered the next business day to the campus library of your choosing.

Originally, the HD was intended to store only low-circulation items. But because the libraries of the Cambridge campus are “full to bursting,” says Pforzheimer University Professor Robert Darnton, the director of the Harvard University library, “doing triage” on thousands of little-used books from the shelves each year to make room for new ones proved impractical. Now, most new books are simply sent to the HD. Although some professors lament the death of shelf-browsing, others are grateful when a book they love is sent off, because they know that when next they want it, not only will it be found, it will be well-preserved: time essentially stands still for the books at the HD, where an environment set at 50 degrees and 35 percent relative humidity is expected to maintain a book in the condition in which it arrived for 244 years.

The price of such longevity and retrievability is about 30 cents per stored volume per year, which compares favorably to the cost of digital storage; expense estimates from the HathiTrust (a national group of research libraries that have created a joint repository for digital collections) for storing a digital book scanned by Google range from 15 cents for black and white to 40 cents for color annually. Actually delivering a physical book from the HD, on the other hand, costs $2.15—much more than the delivery of a digital book to a computer screen.

But making comparisons between digital and analog libraries on issues of cost or use or preservation is not straightforward. If students want to read a book cover to cover, the printed copy may be deemed superior with respect to “bed, bath and beach,” John Palfrey points out. If they just want to read a few pages for class, or mine the book for scattered references to a single subject, the digital version’s searchability could be more appealing; alternatively, students can request scans of the pages or chapter they want to read as part of a program called “scan and deliver” (in use at the HD and other Harvard libraries) and receive a link to images of the pages via e-mail within four days.

One can imagine a not too radically different future in which patronless libraries such as the DRS and the HD would hold almost everything, supplying materials on request to their on-campus counterparts. Print on demand technology (POD) would allow libraries to change their collection strategies: they could buy and print a physical copy of a book only if a user requested it. When the user was done with the book, it would be shelved. It’s a vision of “doing libraries ‘just in time’ rather than ‘just in case,’” says Palfrey. (At the Harvard Book Store on Massachusetts Avenue, a POD machine dubbed Paige M. Gutenborg is already in use. Find something you like in Google’s database of public-domain books—perhaps one provided by Harvard—and for $8 you can own a copy, printed and bound before your wondering eyes in minutes. Clear Plexiglas allows patrons to watch the process—hot glue, guillotine-like trimming blades, and all—until the book is ejected, like a gumball, from a chute at the bottom.)

Indeed, the HD might one day play a role as the fulcrum for “radical collaboration” with the five other law libraries in the Boston area, says Palfrey. “We’re asking, ‘Could we imagine deciding, as a group of six, that we’re actually going to buy something and put it in the Harvard Depository,’” a central location from which the physical book could be delivered to any institution? “It would cost us a sixth as much.” Other Harvard libraries could explore the same strategy.

That doesn’t mean Harvard’s campus libraries would become less important. Because they are embedded in the residential academic community, they remain integral to University life. Students (and faculty members) are big users of the physical spaces in libraries, though they are using them differently than in the past.

“Libraries are not conservative places anymore,” says Cline. “From the user perspective, it is an interesting time. Some people still want the quiet, elegant reading room. Others would be frustrated if they had to be quiet in every part of the buildings, in part because their work requires that they talk, that they work in collaborative teams, that they share some of their research strategies. We’re rethinking the physical spaces to accommodate more of the type of learning that is expected now, the types of assignments that faculty are making, that have two or three students huddled around a computer working together, talking.”

Libraries are also being used as social spaces, adds Helen Shenton, where people can “get a cup of coffee, connect to WiFi, and meet their friends” outside their living space. In terms of research, students are asking each other for information more now than in the past, when they might have asked a librarian. “The flip side,” Shenton continues, “is that some places are embedding their library and information specialists within disciplines and within faculties. So I think the whole model is like one of those snow globes. You pick it up and shake it around and all the pieces will settle in a different way, which is incredibly exciting.”

A Future for Books

“A big misconception is that digital information and analog information are incompatible,” says Darnton, himself an historian of the book. “On the contrary, the whole history of books and communication shows that one medium does not displace another.” Manuscript publishing survived Gutenberg, continuing into the nineteenth century. “It was often cheaper to publish a book of under a hundred copies by hiring scribes,” he says, than it was to set the type and hire people to run the press. Likewise, horsepower increased in the age of railroads. “There were more horses hauling passengers in the second half of the nineteenth century than there were in the first half. And there is good evidence that now, if a book appears electronically on your computer screen, and it’s available for free, it will stimulate sales of the printed version.”

Jeffrey Hamburger—a scholar of an even earlier medium, the medieval manuscript—who was recently named chair of a library advisory group, says that “the notion that we are going to abandon the codex as we have known it—the traditional book—and go digital overnight is very misguided. It is going to be a much longer transition than anyone suspects, just as the transition in the past between the oral tradition of literature in antiquity and silent reading as we’ve known it for almost two millennia was a long transition, taking the better part of a millennium itself.”

Hamburger, the Francke professor of German art and culture, has worked extensively here and in Germany on projects involving the application of new media to the study of medieval manuscripts, but he says there are “still many, many things that new media cannot do as effectively as a good old-fashioned book”: for example, combining text and an associated image on opposing pages. “It’s instructive how many of the words we use to describe computer interfaces—tabs, bookmarks, scrolling—are derived from our experience with the book, and that’s not just because of experience or familiarity,” he adds. “It’s because they have a certain practicality, and all of those, it so happens, are inventions of the Middle Ages.” Computers, in reverting to scrolling, have “gone back to a much older technology, which had its merits but was deficient in its own ways, which is why it was replaced.”

In advocating for the continued importance of books, and raising his concern that this could become the “lost decade” for acquisitions to Harvard’s library collections, Hamburger emphasizes that he is not framing the University’s current crisis in terms of books versus new media. “We need both, and we’ll continue to need both. I think we have to take as a premise that the library is a vast, far-flung, varied institution, as varied and diverse as the intellectual community of the University itself, working for a range of constituents almost impossible to conceive of, and it’s not just a service organization. I would even go so far as to call it the nervous system of our corporate body.”

It would be a terrible mistake, Hamburger continues, “if different factions within the faculty, be it scientists and humanists, be it Western- or non-Western-focused scholars, started squabbling over resources. As a university, we have by definition a catholic, all-embracing mission, and the question is how to coordinate resources, not compete for them. The greatness of this university in the past and in the future rests on the greatness of our library. Without the library—old, new, digital, printed—this institution wouldn’t be what it is.”


Apr 19, 2010

Google - Controversial Content and Free Expression on the Web

Image: Icon for censorship (via Wikipedia)

Two and a half years ago, we outlined our approach to removing content from Google products and services. Our process hasn’t changed since then, but our recent decision to stop censoring search on Google.cn has raised new questions about when we remove content, and how we respond to censorship demands by governments. So we figured it was time for a refresher.

Censorship of the web is a growing problem. According to the Open Net Initiative, the number of governments that censor has grown from about four in 2002 to over 40 today. In fact, some governments are now blocking content before it even reaches their citizens. Even benign intentions can result in the specter of real censorship. Repressive regimes are building firewalls and cracking down on dissent online -- dealing harshly with anyone who breaks the rules.

Increased government censorship of the web is undoubtedly driven by the fact that record numbers of people now have access to the Internet, and that they are creating more content than ever before. For example, over 24 hours of video are uploaded to YouTube every minute of every day. This creates big challenges for governments used to controlling traditional print and broadcast media. While everyone agrees that there are limits to what information should be available online -- for example child pornography -- many of the new government restrictions we are seeing today not only strike at the heart of an open Internet but also violate Article 19 of the Universal Declaration of Human Rights, which states that: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

We see these attempts at control in many ways. China is the most polarizing example, but it is not the only one. Google products -- from search and Blogger to YouTube and Google Docs -- have been blocked in 25 of the 100 countries where we offer our services. In addition, we regularly receive government requests to restrict or remove content from our properties. When we receive those requests, we examine them closely to ensure they comply with the law, and if we think they’re overly broad, we attempt to narrow them down. Where possible, we are also transparent with our users about what content we have been required to block or remove so they understand that they may not be getting the full picture.

On our own services, we deal with controversial content in different ways, depending on the product. As a starting point, we distinguish between search (where we are simply linking to other web pages), the content we host, and ads. In a nutshell, here is our approach:

Search is the least restrictive of all our services, because search results are a reflection of the content of the web. We do not remove content from search globally except in narrow circumstances, like child pornography, certain links to copyrighted material, spam, malware, and results that contain sensitive personal information like credit card numbers. Specifically, we don’t want to engage in political censorship. This is especially true in countries like China and Vietnam that do not have democratic processes through which citizens can challenge censorship mandates. We carefully evaluate whether or not to establish a physical presence in countries where political censorship is likely to happen.

Some democratically-elected governments in Europe and elsewhere do have national laws that prohibit certain types of content. Our policy is to comply with the laws of these democratic governments -- for example, those that make pro-Nazi material illegal in Germany and France -- and remove search results from only our local search engine (for example, www.google.de in Germany). We also comply with youth protection laws in countries like Germany by removing links to certain material that is deemed inappropriate for children or by enabling Safe Search by default, as we do in Korea. Whenever we do remove content, we display a message for our users that X number of results have been removed to comply with local law and we also report those removals to chillingeffects.org, a project run by the Berkman Center for Internet and Society, which tracks online restrictions on speech.
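Nothing about Google's internal systems is public, but just to make the mechanics concrete, here is a minimal sketch (in Python, with invented names) of what removing results for one country only, and telling users how many were removed, could look like:

```python
# Minimal sketch of per-country result removal with a user-facing notice.
# SearchResult and LEGAL_REMOVALS are invented for illustration; nothing here
# reflects Google's actual implementation.

from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    title: str

# Illustrative map: country code -> URLs a local legal order requires removing.
LEGAL_REMOVALS = {
    "de": {"http://example.org/banned-in-germany"},
}

def filter_for_country(results, country_code):
    """Drop legally restricted results for this country only, and count them."""
    blocked = LEGAL_REMOVALS.get(country_code, set())
    visible = [r for r in results if r.url not in blocked]
    removed = len(results) - len(visible)
    notice = None
    if removed:
        notice = (f"In response to a legal request, {removed} result(s) "
                  "have been removed from this page.")
    return visible, notice
```

The same query on a different country domain would pass through untouched, which is the whole point of scoping removals to the local search engine.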

Platforms that host content like Blogger, YouTube, and Picasa Web Albums have content policies that outline what is, and is not, permissible on those sites. A good example of content we do not allow is hate speech. Our enforcement of these policies results in the removal of more content from our hosted content platforms than we remove from Google Search. Blogger, as a pure platform for expression, is among the most open of our services, allowing, for example, legal pornography, as long as it complies with the Blogger Content Policy. YouTube, as a community intended to permit sharing, comments, and other user-to-user interactions, has its Community Guidelines that define its own rules of the road. For example, pornography is absolutely not allowed on YouTube.

We try to make it as easy as possible for users to flag content that violates our policies. Here’s a video explaining how flagging works on YouTube. We review flagged content across all our products 24 hours a day, seven days a week to remove offending content from our sites. And if there are local laws where we do business that prohibit content that would otherwise be allowed, we restrict access to that content only in the country that prohibits it. For example, in Turkey, videos that insult the founder of modern Turkey, Mustafa Kemal Atatürk, are illegal. Two years ago, we were notified of such content on YouTube and blocked those videos in Turkey that violated local law. A Turkish court subsequently demanded that we block them globally, which we refused to do, arguing that Turkish law cannot apply outside Turkey. As a result, YouTube has been blocked there.

Finally, our ads products have the most restrictive policies, because they are commercial products intended to generate revenue.

These policies are always evolving. Decisions to allow, restrict or remove content from our services and products often require difficult judgment calls. We have spirited debates about the right course of action, whether it’s about our own content policies or the extent to which we resist a government request. In the end, we rely on the principles that sit at the heart of everything we do.

We’ve said them before, but in these particularly challenging times, they bear repeating: We have a bias in favor of people's right to free expression. We are driven by a belief that more information means more choice, more freedom and ultimately more power for the individual.


Apr 17, 2010

Web Coupons Tell Stores More Than You Realize - NYTimes.com

Huggle Coupon (image by The Lightworks via Flickr)

For decades, shoppers have taken advantage of coupons. Now, the coupons are taking advantage of the shoppers.

A new breed of coupon, printed from the Internet or sent to mobile phones, is packed with information about the customer who uses it. While the coupons look standard, their bar codes can be loaded with a startling amount of data, including identification about the customer, Internet address, Facebook page information and even the search terms the customer used to find the coupon in the first place.
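The article doesn’t spell out RevTrax’s actual encoding, but as a hedged sketch, here is how that much tracking data could be packed into a single opaque-looking coupon string (all field names below are invented for illustration):

```python
# Hypothetical illustration of packing tracking data into a coupon code.
# The real format used by RevTrax or any retailer is not public; this only
# shows how much can fit inside an innocuous-looking string.

import base64
import json

def make_coupon_code(offer_id, client_id, search_term, ip_address):
    """Bundle the offer plus tracking fields into one opaque coupon string."""
    payload = {
        "offer": offer_id,      # e.g. "15PCT-OFF"
        "client": client_id,    # retailer's internal customer number
        "q": search_term,       # the search that led to the coupon
        "ip": ip_address,       # where the coupon was printed from
    }
    raw = json.dumps(payload, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def read_coupon_code(code):
    """Reverse the encoding when the coupon is scanned at the register."""
    padded = code + "=" * (-len(code) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Scanned at the register, that one string decodes back into the search term and customer number the shopper never knew were riding along.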

And all that information follows that customer into the mall. For example, if a man walks into a Filene’s Basement to buy a suit for his wedding and shows a coupon he retrieved online, the company’s marketing agency can figure out whether he used the search terms “Hugo Boss suit” or “discount wedding clothes” to research his purchase (just don’t tell his fiancée).

Coupons from the Internet are the fastest-growing part of the coupon world — their redemption increased 263 percent to about 50 million coupons in 2009, according to the coupon-processing company Inmar. Using coupons to link Internet behavior with in-store shopping lets retailers figure out which ad slogans or online product promotions work best, how long someone waits between searching and shopping, even what offers a shopper will respond to or ignore.

The coupons can, in some cases, be tracked not just to an anonymous shopper but to an identifiable person: a retailer could know that Amy Smith printed a 15 percent-off coupon after searching for appliance discounts at Ebates.com on Friday at 1:30 p.m. and redeemed it later that afternoon at the store.

“You can really key into who they are,” said Don Batsford Jr., who works on online advertising for the tax preparation company Jackson Hewitt, whose coupons include search information. “It’s almost like being able to read their mind, because they’re confessing to the search engine what they’re looking for.”

Google Search Coupon: 1 FREE Google Search (image by Bramus! via Flickr)

While companies once had a slim dossier on each consumer, they now have databases packed with information. And every time a person goes shopping, visits a Web site or buys something, the database gets another entry.

“There is a feeling that anonymity in this space is kind of dead,” said Chris Jay Hoofnagle, director of the Berkeley Center for Law and Technology’s information privacy programs.

None of the tracking is visible to consumers. The coupons, for companies as diverse as Ruby Tuesday and Lord & Taylor, are handled by a company called RevTrax, which displays them on the retailers’ sites or on coupon Web sites, not its own site.

Even if consumers could figure out that RevTrax was creating the coupons, it does not have a privacy policy on its site — RevTrax says that is because it handles data for the retailers and does not directly interact with consumers. RevTrax can also include retailers’ own client identification numbers (Amy Smith might be client No. 2458230), and the retailer can then connect that number with the actual person if it wants to, for example, to send a follow-up offer or a thank-you note.

Using coupons also lets the retailers get around Google hurdles. Google allows its search advertisers to see reports on which keywords are working well as a whole but not on how each person is responding to each slogan.

“We’ve built privacy protections into all Google services and report Web site trends only in aggregate, without identifying individual users,” Sandra Heikkinen, a spokeswoman for Google, said in an e-mail message.

The retailers, however, can get to an individual level by sending different keyword searches to different Web addresses. The distinct Web addresses are invisible to the consumer, who usually sees just a Web page with a simple address at the top of it.

So clicking on an ad for Jackson Hewitt after searching for “new 2010 deductions” would send someone to a different behind-the-scenes URL than after searching for “Jackson Hewitt 2010,” though the Web pages and addresses might look identical. This data could be coded onto a coupon.
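Here is a hedged sketch of that keyword-to-URL trick; the domain, parameter name, and route tokens are made up, but it shows how little machinery the technique needs:

```python
# Hypothetical sketch of the "different keyword, different landing URL" trick.
# The domain, parameter name, and route tokens are invented for illustration.

from urllib.parse import urlencode

# Each paid keyword points at its own behind-the-scenes tracking URL,
# even though the visible landing page looks identical.
KEYWORD_ROUTES = {
    "new 2010 deductions": "kw-a71",
    "Jackson Hewitt 2010": "kw-b32",
}

def landing_url(search_term):
    """Return the tracking URL for an ad click on this keyword."""
    route = KEYWORD_ROUTES.get(search_term, "kw-default")
    return f"https://offers.example.com/coupon?{urlencode({'src': route})}"

# The route token (e.g. "kw-a71") can later be written into the printed
# coupon's bar code, tying the in-store purchase back to the original search.
```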

RevTrax works similarly with image-rich display ads, with coupons also signaling which ad a person saw and on what site.

“Wherever we provide a link, whether it’s on search or banner, that thing you click can include actual keywords,” said Rob O’Neil, director of online marketing at Tag New Media, which works with Filene’s. “There’s some trickery.”

The companies argue that the coupon strategy gives them direct feedback on how well their marketing is working.

Once the shopper prints an online coupon or sends it to his cellphone and then goes to a store, the clerk scans it. The bar code information is sent to RevTrax, which, with the ad agency, analyzes it.

“We break people up into teeny little cross sections of who we think they are, and we test that out against how they respond,” said Mr. Batsford, who is a partner at 31 Media, an online marketing company.

RevTrax can identify online shoppers when they are signed in to a coupon site like Ebates or FatWallet or the retailer’s own site. It says it avoids connecting that number with real people to steer clear of privacy issues, but clients can make that match.

The retailer can also make that connection when it is offering coupons to its Facebook fans, as Filene’s Basement is doing.

“When someone joins a fan club, the user’s Facebook ID becomes visible to the merchandiser,” Jonathan Treiber, RevTrax’s co-founder, said. “We take that and embed it in a bar code or promotion code.”

“When the consumer redeems the offer in store, we can track it back, in this case, not to the Google search term but to the actual Facebook user ID that was signing up,” he said. Although Facebook does not signal that Amy Smith responded to a given ad, Filene’s could look up the user ID connected to the coupon and “do some more manual-type research — you could easily see your sex, your location and what you’re interested in,” Mr. Treiber said. (Mr. O’Neil said Filene’s did not do this at the moment.)
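The article doesn’t describe the actual promotion-code format, so purely as an illustration, embedding a Facebook user ID and recovering it at redemption could be as simple as this (the prefix and helper names are invented):

```python
# Hypothetical sketch of carrying a Facebook user ID inside a promotion code.
# The "-FB" prefix and function names are invented for illustration.

def fan_promo_code(offer_id, facebook_user_id):
    """Create a promo code that carries the fan's Facebook ID."""
    return f"{offer_id}-FB{facebook_user_id}"

def redeemed_facebook_id(promo_code):
    """At redemption, pull the Facebook ID back out of the scanned code."""
    _, _, tail = promo_code.rpartition("-FB")
    return int(tail) if tail.isdigit() else None

# Example: fan_promo_code("SPRING15", 100004321) -> "SPRING15-FB100004321"
```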

The coupon efforts are nascent, but coupon companies say that when they get more data about how people are responding, they can make different offers to different consumers.

“Over time,” Mr. Treiber said, “we’ll be able to do much better profiling around certain I.P. addresses, to say, hey, this I.P. address is showing a proclivity for printing clothing apparel coupons and is really only responding to coupons greater than 20 percent off.”
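The kind of per-IP profiling Mr. Treiber anticipates doesn’t require anything exotic, either; a hypothetical sketch of the aggregation (with invented field names) might look like this:

```python
# Hypothetical sketch of per-IP coupon profiling; the record layout is invented.

from collections import defaultdict

def profile_by_ip(redemptions):
    """redemptions: iterable of dicts like
    {"ip": "203.0.113.9", "category": "apparel", "discount_pct": 25}"""
    profiles = defaultdict(lambda: {"categories": defaultdict(int),
                                    "discounts": []})
    for r in redemptions:
        profile = profiles[r["ip"]]
        profile["categories"][r["category"]] += 1
        profile["discounts"].append(r["discount_pct"])
    return profiles

# An address whose every redeemed coupon is in "apparel" and whose smallest
# accepted discount is 20 percent matches the profile described in the quote.
```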

That alarms some privacy advocates.

Companies can “offer you, perhaps, less desirable products than they offer me, or offer you the same product as they offer me but at a higher price,” said Ed Mierzwinski, consumer program director for the United States Public Interest Research Group, which has asked the Federal Trade Commission for tighter rules on online advertising. “There really have been no rules set up for this ecosystem.”
