Nature’s free-view links on PubChase


A few days ago, Nature announced that it will allow readers with subscriptions to share links for free viewing of its articles. Many in the media have mistaken this for a move towards open access by the Nature Publishing Group. It is not – the links allow viewing only, without the ability to download or print the article. Michael Eisen has responded with: Is Nature’s “free to view” a magnanimous gesture or a cynical ploy?

I agree that this could be used against open access if people mistake these view-only links as fulfilling the goals of truly open, reusable and accessible science communication. On the other hand, I know Timo Hannay, managing director of Digital Science (the sister company of NPG that was behind the push for the free-view links). I also know some of the founders of the startups supported by Digital Science, and they genuinely strive to improve science with technology. Therefore, I am willing to give this new initiative some benefit of the doubt.

Digital Science is operating within the constraints of a subscription publisher. Subscription publishers generally seem to be resistant to trying anything new that might make science communication better. So I imagine that pushing through this free-link initiative was a huge struggle for Timo Hannay.

Thus, tempering our skepticism, we have enabled link-sharing on PubChase for articles in the Nature journals. Just as scientists can share these links on Twitter or over e-mail, they can easily do the same on PubChase. The added benefit is that the link is then persistent and accessible to other scientists interested in the work.


Save the date: participative Bay Area OA week event for Generation Open

A few local Open Access people – Liz Allen (ScienceOpen), Pete Binfield (PeerJ, with Georgina Gurnhill in the UK), Lenny Teytelman (ZappyLab) and Laurence Bianchini (MyScienceWork, which is expanding to the Bay Area – welcome!) – got together and brainstormed what our ideal OA week event would look like.

We agreed that we wanted to avoid a traditional format, and so we settled on:

  • Moderated un-conference where the audience talks and asks questions
  • Simple event theme – we picked “#OpenAccess – it’s up to all of us”
  • “Lightning talks” that anyone can give: 5 image slides in 5 minutes
  • Time to chat and mingle over a drink and something to eat
  • Ideally, a cool venue with great views (with disabled access)

Not wishing to brag (but bragging nonetheless!), we feel that the program below achieves our vision. We ran it past our academic partners – the UCSF Library (Anneliese Taylor) and the UAW postdoc union (Felicia Goldsmith) – and they liked it too. Now all we need is for you all to save the date and make this an event to remember.

Date: Thursday, October 23rd, 2014

Venue: SkyDeck, Berkeley (one of the first research university startup accelerators)

Time: 6.00 pm – 8.30 pm

Theme: #OpenAccess – it’s up to all of us

Format (suitable for global cloning!):

  • 8 minutes – relax with a drink, a snack and the “What is OA?” video by Jorge Cham (PhD Comics), Nick Shockey (Right to Research) and Jonathan Eisen (UCD)

  • 10 minutes – un-conference OA topic selection by audience
  • 20 minutes – topic discussion with moderation (your host for the evening, Lenny!)
  • 10 minutes – grab another drink (alcoholic or non), stave off hunger with nibbles
  • 40 minutes – lightning talks, “#OpenAccess – it’s up to all of us”
  • Last 30 minutes or so – greeting old friends and making some new ones

In the coming weeks, we’ll let you know where to send your lightning talks and the deadline for doing so. We will be taping the talks for social media (you have been warned), and we will create an Eventbrite page so you can RSVP.

Finally, in the spirit of “the more the merrier”, other OA publishers and academic partners who want to participate are welcome to email Liz.


ZappyLab’s Guide to Crowdfunding

From February to March 15, we ran a successful Kickstarter crowdfunding campaign for protocols.io – our free, up-to-date, crowdsourced repository of life science protocols. Launching and running this Kickstarter campaign was simultaneously one of the smartest and one of the hardest things we did in the two years of our startup. In this post, I would like to share our experience and insights into what it takes to pull off a campaign like this.

It seems that ours was the first crowdfunding campaign aimed specifically at scientists. Many have reached out to me since March to ask whether crowdfunding is a good way to validate an idea or raise funds to launch a new startup. In all such cases, my answer is an unambiguous “no”. We had considered a Kickstarter campaign as a way to get initial funding for protocols.io, way back in 2012, before we even incorporated ZappyLab. It is clear today that had we attempted it two years ago, we would have failed spectacularly (if you are too busy with your startup to read all of this, skip to the bottom for 6 specific questions that will help you determine whether it’s a good idea for you).

There is a misleading perception that Kickstarter or Indiegogo is a good way to market something. That is just plain wrong for the vast majority of possible projects. Kickstarter is not an advertising platform. It does not promote your project or bring viewers to your page. That is your job. Kickstarter is a platform that enables crowdfunding – it provides the structure, the trust and security for the backers, the payment processing, and a flawless interface for communicating with your supporters and promoting your project. But it is up to you to bring the backers to your page. And by the way, Kickstarter makes no secret of this; it tells every project creator very explicitly to reach out to friends and followers, making it clear that success relies entirely on the outreach efforts of the people launching the project.1

Here are the important things to keep in mind about ZappyLab’s Kickstarter. They apply to any crowdfunded project aimed at a specific niche demographic, like life science researchers, rather than at the entire world population.

  • We had raised half a million dollars in angel investment and had been building free and amazing tools for scientists for two years. The science community knows about us and likes what we have done and are attempting to do. We have thousands of PubChase users, and we leaned on their support repeatedly throughout the month of our campaign, asking them explicitly to back our effort.
  • I have worked very hard and have slept very little since we founded ZappyLab. But nothing in the past two years comes close to the sustained effort that was required during our Kickstarter marathon. I literally slept 3-4 hours per night for the entire duration of the campaign.
  • Serendipitously, a blog post that I wrote about academia went massively viral (100,000 views) in February and brought many visitors to our Kickstarter campaign.
  • Science companies Mendeley, Figshare, and PeerJ agreed to help us before we launched. They offered memberships for their services as rewards and they blogged and tweeted about us.
  • Hundreds of people blogged, e-mailed, tweeted and advocated tirelessly on our behalf, with an explicit call to fund our project because of what it can do for life science research.

All of us at ZappyLab are amazed, touched, and humbled by the community’s support of our project. We have recently begun to send the promised rewards to our backers. I will soon take a few days off work to bake gene-shaped cookies for the sweet-craving scientists out there. Yet these are just tiny symbolic tokens of appreciation. I honestly do not know of a proper way to really thank everyone who supported and continues to support us. Perhaps delivering on the promise will come close to appropriately thanking everyone.

It is not an exaggeration when I say that there is no way we could have succeeded with our campaign without the tremendous effort of the community to publicize and encourage the funding of our project. And that is the main point of this guide – Kickstarter is not a way to build the community; rather, if you have built the community, it is a way to tap into the community’s support.


Here is a list of questions you should answer before launching a crowdfunding campaign.

  1. Do you expect Kickstarter to bring attention and visitors to your project? It might happen if they feature you on the homepage or in their e-mail to users with the “projects we love” list, but this is not something you can count on. Assume that no one except you knows that you have launched this campaign.
  2. Are you trying to fund a device or object that everyone craves? Is your project funding the creation of a product that will itself be the reward that you will send to your backers? If yes, you have a shot at going viral.
  3. If you have a nascent idea and no prototype or proof that you can deliver, assume that the only people backing you will be your friends and family. How many close relatives and friends do you personally have? If each one of them contributes $50, will that be enough?
  4. If you are trying to raise an amount that goes beyond your friends and family, how will you publicize the fact that you are running the crowdfunding project? Assume roughly a 1% conversion rate. That is, if you need approximately 500 backers, you’ll have to somehow let 50,000 or more people know that you have launched the project.
  5. Do you already have a community of supporters likely to back you? Do you have mass media contacts, bloggers, and famous people who have promised to bring you visibility? Don’t bet on media coverage to help your Kickstarter project go viral. It works in reverse – if your project goes viral, you are likely to get media coverage. And for every article about you in a major media outlet, assume that you are likely to get only a modest bump of 50-100 backers.
  6. This may be the most important question of all – are you doing this full time? Are you going to be able to devote every waking second, for the entire duration of the campaign, to promoting this?
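The reach arithmetic in questions 3 and 4 above can be sketched as a quick back-of-the-envelope calculator (a sketch only – the goal, average pledge, and conversion rate below are illustrative assumptions, not Kickstarter data):

```python
def required_reach(goal_usd, avg_pledge_usd=50, conversion_rate=0.01):
    """Estimate the outreach needed to hit a crowdfunding goal.

    Assumes every backer pledges avg_pledge_usd and that roughly
    conversion_rate of the people you reach actually back the project.
    """
    backers_needed = goal_usd / avg_pledge_usd
    people_to_reach = backers_needed / conversion_rate
    return round(backers_needed), round(people_to_reach)

# A $25,000 goal at a $50 average pledge and 1% conversion:
backers, reach = required_reach(25_000)
print(backers, reach)  # 500 backers, 50,000 people to reach
```

At a more pessimistic 0.5% conversion, the same goal requires reaching 100,000 people – which is why the outreach plan matters more than the campaign page itself.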


  1. Kickstarter 101 FAQ: “Where do backers come from?” In most cases, the majority of funding initially comes from the fans and friends of each project.

PubPeer comments on PubChase


We are very excited to announce that PubChase articles will now link to PubPeer comments.1 PubPeer is an online journal club, and PubChase users will now automatically get alerts on new discussions of papers in their libraries. As we have already written, this is part of our philosophy that knowledge should come to you. Instead of laborious searches or random collisions with information, new papers, corrections, discussions, and retractions should find you.
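Conceptually, the alerting boils down to a diff over per-paper comment counts between two polls. The sketch below is hypothetical – it shows neither PubChase’s real pipeline nor the PubPeer API’s actual response format, and the DOIs are made up:

```python
def new_discussion_alerts(previous_counts, current_counts):
    """Return IDs of papers whose PubPeer comment count has grown.

    previous_counts / current_counts: dicts mapping a paper ID
    (e.g. a DOI) to the number of comments seen at each poll.
    """
    return [
        paper_id
        for paper_id, count in current_counts.items()
        if count > previous_counts.get(paper_id, 0)
    ]

# One library paper picked up its first comments since the last poll:
before = {"10.1000/xyz123": 2, "10.1000/abc456": 0}
after = {"10.1000/xyz123": 2, "10.1000/abc456": 3}
print(new_discussion_alerts(before, after))  # ['10.1000/abc456']
```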

However, today’s announcement is not just about alert notifications; it is a step towards transforming science communication. Much of the way we do our science and communicate the results is stuck in the 18th or even the 17th century. The mission of ZappyLab, the company behind PubChase, is to use technology to fundamentally shift how scientific knowledge is shared and how it reaches the researchers who need it. This is why we are building protocols.io. This is why we built PubChase, and from the very beginning, two years ago, we envisioned PubChase as a platform that would promote post-publication discussion (see our essay forum).

That is why we are so happy to make use of PubPeer’s free API to facilitate post-publication commenting. As we wrote a week ago, the current pre-publication peer review system is deeply broken, and we see PubPeer as one of the key developments in restructuring the publishing system.

We will do our best to promote constructive comments on PubPeer. We believe it will become the central point for post-publication discussion, with the main consequence of this dialogue being not retractions but improved quality of manuscripts and improved understanding of the published work.


  1. With the “pub” in both names, it was only a matter of time before we would connect.

We Can Fix Peer Review Now

Imagine a software company that solicits user feedback with, “Please let us know what does and does not work in the current release and what you would like to see in the future. However, keep in mind that we will not be making any updates to our products and the version you have is the final one.” This is the state of post-publication peer review today. We ask scientists to comment on static, final, published versions of papers, with virtually no potential to improve the articles. We ask scientists to waste their time and then take the lack of participation as evidence against post-publication peer review.

For two years now, I have heard the argument that efforts to encourage post-publication commentary have failed and therefore cannot succeed in the future. This is the classic “it has not worked so far and therefore never will” mentality (just as people tell me that the lack of mobile devices in the lab right now is proof that scientists will not use phones and tablets for research in the future). Proponents of post-publication over pre-publication review are still viewed as a crazy fringe that is about to derail all that is good about science publishing.

Much has been written on the failure of the current publishing model in science.1–10 I want to focus here on ways to incentivize post-publication peer review and, specifically, ways to incentivize constructive criticism. By far the best demonstration of the power and potential of post-pub review is the PubPeer website. Not only did PubPeer succeed where most journals failed – encouraging comments after publication – but the comments on its site have led to a number of high-profile retractions. PubPeer is a clear demonstration of the power to catch problems that pre-publication peer review is simply incapable of flagging.

A common criticism of PubPeer is that the comments are overwhelmingly negative. However, as should be clear from the above, this is not a problem with PubPeer; it is the fault of our publishing structure. Scientists are already sleep-deprived and overwhelmed with work. Why spend time commenting on a paper if the paper won’t improve? Naturally, the comments that do get written are those likely to lead to a retraction or a major correction, as these are effectively the only actions that can still be applied to a published manuscript.

The good news is that the solutions to this are already live. The journal F1000Research has broken ground with support for versions of manuscripts. Authors can return after publication and edit their papers, with clearly tracked and stamped versions (example here). This is a big deal. After publishing my papers, I have gotten countless e-mails with important references that I missed and with great questions that suggested easy clarifications to strengthen the manuscripts. If only I could easily edit and improve my papers! As more publishers enable versioning, the incentive to provide constructive rather than destructive feedback on PubPeer will increase exponentially.

Luckily, you don’t have to wait a decade or two for other publishers to catch up to F1000 and enable versioning. Just submit your manuscript to a preprint server like bioRxiv or arXiv before you send it to a journal, and then solicit reviews from scientists and encourage them to publish these on PubPeer. Both bioRxiv and arXiv have versioning, and you can continue to improve your paper even after it is published in a traditional journal.

There are many great reasons to deposit to a pre-print server, but even if you don’t, or for papers you have previously published, you can easily contribute to productive and constructive post-publication commentary. We are constantly answering questions about our publications at seminars, at conferences, and via e-mail. These discussions would be so helpful if made public. By e-mail, you answer one person; on PubPeer, you answer 1,000. You can quickly make an FAQ section on your paper based on the questions you commonly get, or you can copy-paste entire e-mail threads (I have done just that on my recent publication). Ideally, in the future, these discussions will happen directly on PubPeer instead of privately by e-mail.

Finally, there are thousands of journal clubs happening each week with deep and careful discussions of papers. This is post-publication peer review! You spend hours preparing to present the paper. You have concerns, questions, and positive feedback. Why not share it openly or anonymously on PubPeer? After all, PubPeer calls itself the “online journal club”. PubPeer engages the authors for you so you can get clarifications and additional information. You can help other scholars interested in this work. You can help the authors to improve the understanding of their work, and if they published on a pre-print server or with a journal that has versions, possibly help the authors to improve the manuscript itself.

There is no reason to wait for publishers to innovate. With a few exceptions, innovation is neither the forte nor the goal of publishers. As scientists, with just a few minutes of our time, we can already contribute to the online annotation and discussion of published research. We can push for constructive post-publication discussion and peer review as authors and as readers. The tools are at our disposal. Let’s use them. Let’s elevate the tone of the commentary, and let’s comment on the vast majority of papers that are good and not headed for Retraction Watch. If we make an effort as scientists now, we will naturally validate post-publication peer review and help bring about healthier scientific publishing and discourse.


[Because I am a fan of making frequently asked questions public, below are the common motifs in defending the status quo of peer review and publishing.]

  • Pre-pub peer review improves papers

This seems obvious on the surface. Certainly all of my manuscripts improved thanks to peer review. But by how much? And at what cost (see here and here)? Is the 9-month average delay helping or hurting science?

I am a strong advocate for academic peer review. I don’t know anyone who argues against improving papers and quality of science through review. But why pre-publication? I think the current system does more harm than good to the quality of science. Just consider the fact that the paper does not see the light of day until the reviewers and the editor have been satisfied. This is a ludicrous level of pressure; pressure not to improve but to provide the desired results. Not only does this contribute to outright fraud, but as Arjun Raj points out, this constantly leads to inadvertent bad science.

And any good from pre-pub peer review will still be there with post-publication peer review. In fact, papers will improve more rapidly and will gain more reviews with the post-pub structure with versions. We’ll have higher quality manuscripts, faster, with fewer retractions, and fewer dubious results.

  • The current system is stretched and has weaknesses, but is it really broken? Good science still gets published.

It’s broken beyond repair. The average 9-month delay from submission to publication is inexcusable given our current tools. We are publishing the same way Gregor Mendel did, despite the advent of computers, the internet, social networks, and mobile devices. Good science gets published despite, not because of, the current system. The current publishing structure is pushing people out of science. It is demoralizing, exhausting, and destructive.

A professor at the University of Washington, in response to an invitation to share a story about published research on our PubChase Essays: “Thanks for the invitation to share a story [about my research]. I’ll see if I can come up with one, but to be honest, publishing has become such a war that once a paper is out, I think I try to forget the details of what went on as soon as I can.”

A professor at Brandeis, also in response to the story request: “Not sure what dirt I want to dish about my papers. I could tell how it took 6 tries to publish our recent manuscript, or how others of our most frequently cited papers were rejected from various journals.”

Professor Arjun Raj: “For myself, I can say that by the time a paper comes out, I usually never want to see it again. The process just takes so long and is so painful that all the joy has long since been squeezed out of the paper itself.”

  • Pre-pub is a filter so we don’t have to read crap. Too much is published already.

It’s a bad filter. It approves bad papers and rejects good ones.

Yes, the volume of publications is overwhelming. Over 100,000 papers are deposited into PubMed each month. But the solution isn’t to reject more; that just leads to delays. Most rejected papers are still published, just later, and rejections are often random. And how much more would we have to reject to make the information flow manageable? With the internet and today’s technology, we have far better filtering options than we did 300–400 years ago. The solution is to improve tools like PubChase, which solve the problem of discovery via personalized recommendations. And, of course, post-pub review can serve as the same filter, only faster and better.


  1. Peer review is f***ed up – let’s fix it
  2. Stop deifying “peer review” of journal publications
  3. The Seer of Science Publishing
  4. End the wasteful tyranny of reviewer experiments
  5. Is peer review broken?
  6. I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals
  7. The Cost Of The Rejection-Resubmission Cycle
  8. The gift that keeps on giving
  9. The magical results of reviewer experiments
  10. Dear Academia, I loved you, but I’m leaving you. This relationship is hurting me.

Knowledge Should Come to You

The goal of ZappyLab (the company behind PubChase) is to transform information exchange between researchers. But enabling the sharing of critical information via our mobile suite and its visualization on protocols.io is just half the story. The other half of our company is about how that information reaches you.

Search engines are amazing nowadays, but often you don’t know what to search for, or even that you should be searching in the first place. So for two years, we’ve been working hard to make your life easier. You shouldn’t have to search. The information that you need should come to you.

Here’s what we have done, are doing, and will do next at ZappyLab to simplify discovery of knowledge.

1. PubChase is not a search engine. You can search for papers, and our search is smart, but the reason Matt Davis conceived of PubChase is so that the papers you need will come to you. Of course, if you need a given article, you can easily find it by title via Google or PubMed. But when a new paper is published, how do you know you should be searching for it? With PubChase, you just sit back and check your recommendations, and “let the research find you.”
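PubChase has never published its recommendation algorithm, so the following is only a generic sketch of the content-based idea behind “let the research find you”: build a word profile from the titles in a user’s library and rank new papers by cosine similarity to it. All titles and names here are invented examples.

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    """Turn text into a bag-of-words frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors (0.0 if empty)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(library_titles, candidate_titles, top_n=1):
    """Score candidates against the library profile; return the best matches."""
    profile = vectorize(" ".join(library_titles))
    scored = [(cosine(profile, vectorize(t)), t) for t in candidate_titles]
    scored.sort(reverse=True)
    return [t for score, t in scored[:top_n] if score > 0]

library = ["RNA splicing in yeast", "Ribosome profiling of yeast"]
candidates = ["Deep learning for images", "Splicing regulation in yeast ribosomes"]
print(recommend(library, candidates))  # ['Splicing regulation in yeast ribosomes']
```

A real system would use abstracts, citation graphs, and reading history rather than bare titles, but the principle – scoring new papers against a profile of what you already read, instead of waiting for a search query – is the same.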

2. Today, we are announcing a new PubChase feature – retraction notifications. You have a thousand papers in your library. One of them is retracted today. How do you know? Starting today, any new story on Retraction Watch will be linked from the corresponding paper in PubChase. More importantly, if a paper in your library is retracted and the retraction is announced on Retraction Watch or PubMed, you will get a notification from PubChase right away.
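The PubMed side of such a check can be pictured as a simple filter over library metadata. This is an assumption-laden sketch: PubMed flags retracted papers with a “Retracted Publication” publication type, but the record layout below only mimics esummary-style metadata rather than reproducing it exactly, and the PMIDs are sample data.

```python
def retracted_papers(library):
    """Return PMIDs in a user's library flagged as retracted.

    library: dict mapping PMID -> record, where each record carries a
    'pubtype' list (as PubMed esummary-style metadata does).
    """
    return [
        pmid
        for pmid, record in library.items()
        if "Retracted Publication" in record.get("pubtype", [])
    ]

library = {
    "11111111": {"pubtype": ["Journal Article", "Retracted Publication"]},
    "22222222": {"pubtype": ["Journal Article"]},
}
print(retracted_papers(library))  # ['11111111']
```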

3. Retractions may be rare, but modifications and corrections are a constant stream. PLOS recently announced versions of papers, and we are working on connecting that information to you, just as we have done with retractions. Of course, protocols.io is all about changes, corrections, and improvements. And should a protocol that you use be modified, you will get a buzz on your phone inside Bench Tools.

Science is an ever-evolving enterprise, and no paper is ever final or static. We are pushing hard to facilitate the sharing of new knowledge and to ensure that this information finds you.

Was Henry Ford an idiot? (Will Scientists Use Mobile in the Lab?)

For two years now, ZappyLab has been building mobile tools for laboratory researchers. For two years we have heard the concern that scientists won’t adopt mobile technology for their experimental work. The skeptics’ reasons generally fall into one of the following bins:

  • contamination of the device
  • apps built for scientists have not caught on
  • inconvenience of use with gloves

I will address each of these concerns in detail below, but first I ask you to imagine yourself in the middle of Manhattan’s Central Park around 1907. You are having a pleasant picnic with a friend who says, “Cars have no future. Just look around – do you see any cars? Nope! Only horses. Why is that? Well, because cars have no grace, no soul, no style. They are heaps of metal with deafening noise. And they cost a fortune compared to a horse. Ford is an idiot.” This is the same friend who today warns that mobile has no chance of entering the lab.

Contamination concern: Researchers don’t want the phone to contaminate their experiments and are afraid of bringing hazardous chemicals back home on their device.

We released our first iOS app, the Lab Counter, in the summer of 2012. Since then, we have launched many more apps on Android and iOS. Each of our apps has Google Analytics inside, so we have tons of anonymized data on downloads and usage of our tools. It is clear from our stats that researchers all over the world are not only downloading our tools but using them with extraordinary engagement. In academic centers, pharmaceutical companies, and biotechs, thousands of researchers are using our apps, and using them on a regular basis.

Undoubtedly, many will not want to risk a chemical spill on their phone and will not put it on their bench. But who said that our Bench Tools suite has to be installed and used on a personal device? Consider the above-mentioned Lab Counter; it replaces a metal counter that laboratories typically purchase for as much as $1,000. Several iPad minis can be bought for that price, and we already know of several laboratories that have saved hundreds of dollars by using our free Lab Counter app. And the counter is just one of the many utilities inside our suite.


Scientists don’t want or need mobile: The few apps that have been made for scientists have not caught on – clearly scientists just don’t need mobile tools at the bench.

It is true that the vast majority of science apps released over the past few years have been total flops. No one downloads them. Zero interest. However, it is wrong to assume that the cause of this is a lack of interest from scientists. As I noted above, we have clear evidence that scientists do want and do use our mobile tools at work. So why do most other apps for scientists fail?

One problem is that many science apps are just marketing gimmicks. Companies wrongly assumed that an app for scientists would instantly get thousands of downloads just because it is mobile. Well, if it is a poorly built, buggy, useless app that you did not seriously invest in, it will do you no good. It will just collect negative reviews for a few days, and then it will die with no more downloads. If you want an app to be used, you have to take the user interface and the mobile development seriously and invest a huge effort into making a candy of an app.

The second, and main, reason for the lack of popularity of even good science apps is that there is no market yet. Two years after launch, ZappyLab still seems to be one of only two companies seriously building mobile tools for scientists (mobile is at the core of our development, and we are on iOS and Android). Contrary to what many think, the lack of competition actually makes our job harder: it means that scientists simply don’t know that mobile tools exist for them. So when a biologist comes to iTunes, she is looking for games and music, like everyone else, not for research tools.

If you want revenue, a mobile app for scientists will give you none. And if you want to do marketing, a mobile app is a waste of money. The app will not promote your company; on the contrary, if you want downloads, you will have to work very hard to promote and market the app.

Inconvenient to use with gloves

I have been at the bench for ten years, and much of the work is done without gloves. I am not pouring ethidium bromide while counting cells or pipetting water into cuvettes. I don’t wear gloves when I am at the microscope. And if I am handling chemicals and need to write in my notebook, I take the gloves off. Moreover, there are cheap stylus pens that can be used to tap on a tablet or phone while wearing gloves.

The above points are a technical argument about overcoming the inconvenience of gloves and phones. The real reason scientists will use mobile in the lab is that today’s phones and tablets, and the way we currently use them, are already the past. You shouldn’t have to touch your phone – think of Leap Motion. And why should it even be a phone? We have already ported the Molarity calculator and the Timer from Bench Tools onto Google Glass, and we are actively working on adding the Protocol checklist to the Google Glass version of Bench Tools.

Voice activation, motion control, heads-up devices like Glass, watches, and the technology we don’t even know about today because it will be announced tomorrow – that is what scientists will be using in the near future. But if you don’t go mobile and don’t see what the near future holds, you are betting on the horse instead of the car.


Academic Assessment: Nature versus Nurture


There is a strong movement towards alternative metrics for assessing research and researchers. The San Francisco Declaration on Research Assessment is a terrific and eloquent argument in favor of judging the quality of the article rather than the impact factor of the journal where it is published. Both of the key products from our company – PubChase and Protocols – aim at undermining the impact factor by facilitating the discovery and sharing of research knowledge with no regard for where it was published. However, when it comes to assessing academic researchers for hiring, funding, promotion, and tenure, the alternative metrics movement does not go far enough. Whether the assessment is journal-based or article-based, we are assessing and rewarding the wrong metric.

In science, students and postdocs typically refer to professors by one of three titles: PI (principal investigator), mentor, or advisor. Strikingly, when assessing faculty, universities and funding agencies pay attention only to the PI (research) aspect. Whether you are the best or the worst mentor in the world is virtually irrelevant to the tenure decision.

This is a travesty. We do not look for mentoring ability when we hire. We do not teach how to mentor. And we do not reward good mentoring in any way.

We have already written about the fact that no one trains professors how to mentor. This is why we launched our Career Advice forum. As we wrote in January:

Academic faculty appointments at universities do not select for teaching or managing ability. We look for talented scientists, not mentors or teachers. And as with teaching, there is a natural distribution – some mentors are gems, most are mediocre, and some are nightmares.

If you are a group leader at Merck or Novartis, you will be trained in how to manage people and be a boss. Alas, no such mandatory training exists for professors. The tragedy of this is that a researcher at Merck, even if the boss is a disaster, can switch to Sanofi, Genentech, and so on (this researcher already has a PhD and has likely worked in the industry for many years). But a graduate student or postdoc is in a delicate relationship, where switching away from an abusive mentor is far from easy. Advisors hold power over their lab members that can be devastating when misapplied.

Obviously, making mentoring a part of the assessment equation is not trivial. How does one measure “good advising”? How can we suddenly base promotions on assessment of mentoring if we do not hire with any regard to it, do not teach how to advise, and do not incentivize it in any way?

Rewarding and measuring mentoring is hard. However, ignoring it entirely, as we do now, is costly and absurd. The lifetime contribution of any professor to society, on any single topic of study, is dwarfed by the contribution of the scientists the professor trains. Moreover, from the perspective of the PI, there is often a tradeoff between using students and postdocs to get an article into Nature versus nurturing and mentoring these scientists.

How much should we weigh mentoring versus research output? Maybe 80% based on advising and 20% on research. Maybe it should be 50-50. Whatever the right number is, excluding mentoring entirely from the assessment of faculty is damaging our society, impeding science, and ruining lives.

Posted in Uncategorized | 4 Comments

What do Facebook “likes” of companies mean?

You are drinking a Coke at lunch. Do you feel compelled to go to Facebook and “like” the Coca-Cola Facebook page? You use MATLAB to process your microscopy data. Do you express your gratitude for not having to count RNA spots by hand by “liking” MATLAB? And if your physician prescribes a pill made by AstraZeneca, do you “like” the pharmaceutical? Well, someone does.

In June of 2013, we released our essay platform for scientists to tell the stories behind their research. A month before that, we tried to build up a following on Facebook, so that we would be able to make the author-contributed content more visible. We had about 100 “likes” from our science friends and decided to pay for promotion to accumulate more. This exercise (and the resulting “likes”) is still the most puzzling event in the two-year existence of our startup. For $50, we ended up purchasing 900 empty likes that we cannot get rid of to this day.

We are not talking about going to an outside agency to accumulate fake likes – something that Facebook prohibits. We are talking about Facebook giving companies the ability to pump cash into the generation of useless but impressive-sounding “likes” with the click of a button via the “promote your page” campaigns. A click that Facebook promises will allow you to “connect with more of the people who matter to you.” We planned to spend $400-500 to grow our “likes” by one or two hundred. It took us a day to realize that we were paying Facebook not for interested followers but for a meaningless number. For $50, on the very first day, we instantly felt the love from almost a thousand Facebook accounts in India. None of them seemed to be scientists, and we are far from certain that they are real people. Here are the countries and cities that “like” PubChase.

Of course, there are countless scientists in India, but the statistics from Google Analytics on the use of our apps for scientists, be they on iOS or Android, clearly indicate that India is not our main user base [1].


| PubChase iOS | PubChase Android | Lab Counter iOS | Lab Counter Android | Bench Tools iOS | Bench Tools Android |
|---|---|---|---|---|---|
| U.S. | U.S. | U.S. | U.S. | U.S. | U.S. |
| UK | UK | Brazil | Japan | Germany | China |
| China | China | UK | Belgium | Turkey | Australia |
| Brazil | Italy | Mexico | Brazil | UK | Netherlands |
| Germany | Germany | Australia | Ireland | S. Korea | Poland |
| Poland | India | Italy | Spain | Netherlands | Turkey |
| Canada | Spain | Japan | China | Thailand | Germany |
| Italy | Guatemala | France | Canada | Ukraine | Greece |
| Mexico | Canada | Canada | Argentina | Italy | Austria |
| Turkey | Vietnam | Thailand | Mexico | China | Italy |

As we tried to understand why New Delhi citizens were so enamored with PubChase, we came across a post titled “Are 40% Of Life Science Company Facebook Page ‘Likes’ From Fake Users?” In our case, it seems to be about 90%.

We stopped our campaign right away, but it was too late. Facebook has no interface for removing these fake likes. You have to delete each follower manually, and only the few dozen most recent ones are accessible; there is no way to clear likes beyond those. So we are stuck with our following, and that means it is senseless for us to promote any content on Facebook at this point.

This is far from an isolated incident, as is clear from the above post. A quick look at companies similar to ours, software for scientists, suggests that many fall into this trap [2].

Naturally, this is not exclusive to software companies. The most popular city for Novartis and Pfizer is Cairo, and Sanofi has its most loyal fans in Karachi, Pakistan. If we had to give a prize for the funniest example of this, it would have to be the “AstraZeneca US Community Relations” page on Facebook. It clearly says “This page is intented [sic] for US residents only,” yet its most popular city is Algiers.

Responding to a BBC investigation of fake likes in 2012, Facebook claimed:

“We’ve not seen evidence of a significant problem,” said a spokesman.

“Neither has it been raised by the many advertisers who are enjoying positive results from using Facebook.

All of these companies have access to Facebook’s analytics which allow them to see the identities of people who have liked their pages, yet this has not been flagged as an issue.

A very small percentage of users do open accounts using pseudonyms but this is against our rules and we use automated systems as well as user reports to help us detect them.”

There are a few simple reasons why the companies don’t raise it as an issue. No one wants to admit that they buy ‘likes’ on Facebook. And certainly no one wants to admit to buying possibly fake ‘likes’. Most importantly, once tricked and sitting on a pile of these senseless ‘likes’, you at least want to get something for your money. So you stick this number on your landing page in the hope of convincing everyone that you have a strong community and that your product is taking off. It should not be surprising that we only felt confident enough to write about this travesty once we had organically acquired a real user base for our products.

We have no idea whether these meaningless “likes” are the bane of Facebook’s anti-bot and anti-spam efforts or whether Facebook quietly enjoys the revenue from these purchases. Regardless, it is certainly misleading for Facebook to promote this as a way to connect with the people who matter to the advertiser.


  1. Country stats based on 80,000 total views of the apps.
  2. Most popular city as of January 20, 2014.
Posted in Uncategorized | 11 Comments

The Fake Open Access Scam

We have just enabled Lens-viewing of open access articles on PubChase, in collaboration with Ivan Grubisic [1]. Lens is an extraordinary step forward in the visualization of research. Not only is it infinitely superior to PDFs, but it is even better than reading manuscript printouts. Figures sit next to the text, and you no longer need to hop around the article between the text and the references, constantly losing your place [2]. Alas, there is a wrinkle. We had hoped to Lensify all PubMed Central free content, but it turns out that we cannot, because only a fraction of PMC content is truly open access; free to read does not mean open access.

The PMC content that we can legally display in the Lens format on PubChase is that which is under Creative Commons licenses. Most of these papers are from the PLOS, BioMed Central, and Hindawi publishers. Unfortunately, almost 90% of PMC articles are free to read as PDFs but are under restrictive publisher copyrights that make it illegal for PubChase to reformat them. Even author-submitted manuscripts in compliance with the NIH Public Access Policy are subject to the publisher copyright, and we cannot display them.

This shocked me. While there has recently been much buzz about scams by new OA journals, especially with the Science Sting by Bohannon, the biggest scam is the one by subscription journals. Many erroneously assume that only open access journals charge a fee for publication, while subscription journals only charge for access. Far from it. My recent paper in PNAS cost $3,500 to publish with the following fees (excerpt from PNAS acceptance e-mail):

Payment of the page charge of $75 per printed page will be assessed from all authors who have funds available for that purpose. Payments of $300 per article for up to five pages of Supporting Information (SI), $600 per article for six or more pages of SI, and $350 per color figure or table will be assessed. Authors of research articles may pay a surcharge of $1,350 to make their paper freely available through the PNAS Open Access option. If your institution has a current Site License, the open access surcharge is $1,000. Payment by authors of the following additional costs is expected: $150 for each replacement or deletion of a color figure or table, $25 for each replacement of a black-and-white or SI figure, and $25 for manuscript file replacement. Proofs should be returned within 48 hours.
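The fee schedule above can be turned into a simple calculator. As a minimal sketch, the page and figure counts below are hypothetical assumptions chosen for illustration (the actual composition of the paper is not stated here); only the per-item rates come from the quoted e-mail.

```python
# Rates quoted in the PNAS acceptance e-mail above.
PAGE_CHARGE = 75        # per printed page
SI_CHARGE_SHORT = 300   # up to five pages of Supporting Information
SI_CHARGE_LONG = 600    # six or more pages of SI
COLOR_CHARGE = 350      # per color figure or table
OA_SURCHARGE = 1350     # PNAS Open Access option (no site license)

def pnas_fees(printed_pages, color_figures, si_pages, open_access=True):
    """Total publication cost under the quoted fee schedule."""
    total = printed_pages * PAGE_CHARGE
    if si_pages >= 6:
        total += SI_CHARGE_LONG
    elif si_pages > 0:
        total += SI_CHARGE_SHORT
    total += color_figures * COLOR_CHARGE
    if open_access:
        total += OA_SURCHARGE
    return total

# One assumed combination (6 printed pages, 4 color figures, short SI)
# that reproduces the $3,500 total mentioned below:
print(pnas_fees(printed_pages=6, color_figures=4, si_pages=3))  # 3500
```

Note how quickly the line items add up: even before the open access surcharge, this hypothetical paper already owes $2,150 in page, SI, and color charges.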

This is way more than the cost of publishing in PLOS One, or even PLOS Biology, not to mention PeerJ, and after publication PNAS would still charge for access to the paper. But the part that upsets me most is that on top of these fees, PNAS charges $1,350 [3] to publish an article as “open access”, and it now turns out that it’s not even open access and we cannot display it in Lens on PubChase.

While the scams of the shady OA journals are irritating, they are largely irrelevant. On the other hand, the scam by the subscription journals is outrageous and seriously damaging to science.

  1. Lens is an open access project that was initially sponsored by eLife. We are helping Ivan extend it beyond eLife to as much content as possible.
  2. Please note this is still in beta. Because of the lack of uniformity in the XML formats submitted to PMC, Ivan has to handle the Lensification of articles separately for each publisher, and some links may not work yet depending on the article.
  3. The PNAS charge of $1,350 is exactly what it costs to publish the entire article in PLOS One!!!
Posted in Uncategorized | 6 Comments