Feed aggregator

New feature in PubMed: Clinical trials link to Systematic Reviews which cite them

In Plain Sight - Wed, 2014-04-02 12:25

If your PubMed search turns up a clinical trial, you may see a box linking to systematic reviews that have cited that trial. Not every clinical trial or systematic review in PubMed is included (yet). For now, the National Library of Medicine is working on linking each of the 31,000-plus systematic reviews included in PubMed Health to the trials they cited.

Screen shot from PubMed

From the PubMed Health website: “PubMed Health provides information for consumers and clinicians on prevention and treatment of diseases and conditions.

PubMed Health specializes in reviews of clinical effectiveness research, with easy-to-read summaries for consumers as well as full technical reports. Clinical effectiveness research finds answers to the question “What works?” in medical and health care.”

For more information about this new feature, see http://www.nlm.nih.gov/pubs/techbull/jf14/brief/jf14_pm_health_blog_trials_sys_reviews.html

Categories: In Plain Sight

Solr/Blacklight highlighting and upgrading Blacklight from 5.1.0 to 5.3.0

CKM Blog - Mon, 2014-03-31 13:22

Last week, I ran into a highlighting issue with Blacklight where clicking on a facet blanked out the values of the fields with highlighting turned on.  I debugged into the Blacklight 5.3.0 gem and found that document_presenter.rb displays the highlight snippet from the Solr highlighting response. If Solr returns no highlighting for a field, it returns nil to the view.

when (field_config and field_config.highlight)
  # retrieve the document value from the highlighting response
  @document.highlight_field(field_config.field).map { |x| x.html_safe } if @document.has_highlight_field? field_config.field

This seemed strange to me because I couldn’t guarantee that Solr would always return something for the highlighting field.  So I posted my question to the Blacklight users’ group.  I got a response right away (thank you!), and it turns out Blacklight inherits Solr’s highlighting behavior.  To always get a value back for a highlighted field, an hl.alternateField must be set in the Solr configuration.

Here’s my code in the catalog_controller.rb that enables highlighting:

configure_blacklight do |config|
  ## Default parameters to send to solr for all search-like requests.
  ## See also SolrHelper#solr_search_params
  config.default_solr_params = {
    :qt => 'search',
    :rows => 10,
    :fl => 'dt pg bn source dd ti id score',
    :"hl.fl" => 'dt pg bn source',
    :"f.dt.hl.alternateField" => 'dt',
    :"f.pg.hl.alternateField" => 'pg',
    :"f.bn.hl.alternateField" => 'bn',
    :"f.source.hl.alternateField" => 'source',
    :"hl.simple.pre" => '',
    :"hl.simple.post" => '',
    :hl => true
  }
  ...
  config.add_index_field 'dt', :label => 'Document type', :highlight => true
  config.add_index_field 'bn', :label => 'Bates number', :highlight => true
  config.add_index_field 'source', :label => 'Source', :highlight => true
  config.add_index_field 'pg', :label => 'Pages', :highlight => true

Another issue I ran into was upgrading from Blacklight 5.1.0 to 5.3.0, which has an impact on the solrconfig.xml file.  It took me a bit of time to figure out the change that’s needed.

In the solrconfig.xml that ships with Blacklight 5.1.0, the standard requestHandler is set as the default.

<requestHandler name="standard" default="true" />

This means that if the qt parameter is not passed in, Solr will use this request handler.  In fact, with version 5.1.0, it does not matter at all which request handler is set as the default. In my solrconfig.xml, my own complex request handler is set as the default, and it did not cause any issues.

But in 5.3.0 the search request handler must be set as the default:

<requestHandler name="search" default="true">

This is because Blacklight now issues a Solr request like this: [Solr_server]:8983/solr/[core_name]/select?wt=ruby. Notice the absence of the qt parameter. The request is routed to the default search request handler to retrieve and facet records.

Categories: CKM

RefWorks Flow: A Free Document Management Tool

In Plain Sight - Mon, 2014-03-31 11:21

Looking for an alternative to Mendeley and Zotero? Are you a RefWorks user who wants a tool better suited to collaboration and document management?  You might want to take a look at RefWorks Flow.

Launched in 2013, Flow is designed to help researchers discover, store, and organize academic articles, citations, and metadata downloaded from electronic databases, and to collaborate with other researchers. This cloud-based tool facilitates collaboration by allowing group annotation of articles, sharing of datasets, and group editing of draft documents.

Any student or faculty member with a verifiable academic email address can sign up for a free account, which offers 2GB of cloud storage and up to 10 collaborators per project.

View a short online tutorial.

Categories: In Plain Sight

On Metrics

CKM Blog - Mon, 2014-03-24 16:36

Collecting metrics is important. But we all know that many metrics are chosen for collection because they are inexpensive and obvious, not because they are actually useful.

(Quick pre-emptive strike #1: I’m using metrics very broadly here. Yes, sometimes I really mean measurements, etc. For better or for worse, this is the way metrics is used in the real world. Oh well.)

(Quick pre-emptive strike #2: Sure, if you’re Google or Amazon, you probably collect crazy amounts of data that allow highly informative and statistically valid metrics through sophisticated tools. I’m not talking about you.)

I try to avoid going the route of just supplying whatever numbers I can dig up and hope that it meets the person’s need. Instead, I ask the requester to tell me what it is they are trying to figure out and how they think they will interpret the data they receive. If pageviews have gone up 10% from last year, what does that tell us? How will we act differently if pageviews have only gone up 3%?

This has helped me avoid iterative metric fishing expeditions. People often ask for statistics hoping that, when the data comes back, it will tell an obvious story that they like. Usually it doesn’t tell any obvious story or tells a story they don’t like, so they start fishing. “Now can you also give me the same numbers for our competitors?” “Now can you divide these visitors into various demographics?”

When I first started doing this, I was afraid that people would get frustrated with my push-back on their requests. For the most part, that didn’t happen.

Instead, people started asking better questions as they thought through and explained how the data would be interpreted. And I felt better about spending resources getting people the information they need because I understood its value.

Just like IT leaders need to “consistently articulate the business value of IT”, it is healthy for data requesters to articulate the value of their data requests.

Categories: CKM

Headless JavaScript Testing, Continuous Integration, and Jasmine 2.0

CKM Blog - Mon, 2014-03-17 15:29

Earlier this month, my attention was caught by a short article entitled “Headless Javascript testing with Jasmine 2.0” by Lorenzo Planas. Integrating our Jasmine tests on Ilios with our Travis continuous integration had been on my list of things to procrastinate on. The time had come to address it.

After integrating Lorenzo’s very helpful sample into our code base, we ran into a couple of issues. First, the script was exiting before the specs were finished running. The sample code has only a small number of specs, so it never ran into that problem. Ilios has hundreds of specs, and the runner seemed to exit after running around 13 of them.

I patched the code to have it wait until it saw an indication in the DOM that the specs had finished running. Now we ran into the second issue: The return code from the script indicated success even when one of the specs failed. For Travis, it needed to supply a return code indicating failure. That was an easy enough patch, although I received props for cleverness from a teammate.
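The waiting patch boils down to a familiar pattern: poll for a completion flag with a timeout, then propagate a failing exit status to CI. Here is a minimal sketch of that pattern in Ruby (the actual patch lives in the PhantomJS runner script; the names below are hypothetical, for illustration only):

```ruby
# Poll a condition until it becomes true, or give up after a timeout.
# Returns true if the condition became true in time, false otherwise.
def wait_until(timeout: 10, interval: 0.1)
  deadline = Time.now + timeout
  until yield
    return false if Time.now >= deadline
    sleep interval
  end
  true
end

# Simulate the DOM flag that flips once the specs finish running.
finished_at = Time.now + 0.2
specs_done = wait_until(timeout: 2) { Time.now >= finished_at }

# A real runner would `exit 1` on failure so Travis marks the build red.
status = specs_done ? 0 : 1
```

The second fix in the post is exactly the last line: making sure a spec failure surfaces as a non-zero process exit code, since Travis only looks at exit status.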

I sent a pull request to the original project so others could benefit from the changes. Lorenzo not only merged the pull request but put a nice, prominent note on the article letting people know, even linking to my GitHub page (which I then hurriedly updated).

So, if you’re using Jasmine and Travis but don’t have the two yet integrated, check out Lorenzo’s repo on GitHub and stop procrastinating!

Categories: CKM

Working with Blacklight Part 3 – Linking to Your Solr Index

CKM Blog - Tue, 2014-03-11 09:07

We are using Blacklight to provide a search interface for a Solr index.  I expected it to be super straightforward to plug our Solr index into the Blacklight configuration.  That wasn’t quite the case! Most of the basic features do plug in nicely, but if you use more advanced Solr features (like facet pivot), or if your solrconfig.xml differs from the Blacklight example solrconfig.xml file, then you are out of luck.  There is not currently much documentation to help you out.

SolrConfig.xml – requestDispatcher

After 3.6, Solr ships with <requestDispatcher handleSelect="false"> in the solrconfig.xml file.  But Blacklight works with <requestDispatcher handleSelect="true"> and passes in the qt (request handler) parameter explicitly.  An example of a Solr request sent by Blacklight looks like this: http://example.com:8983/solr/ltdl3test/select?q=dt:email&qt=document&wt=xml.

The /select request handler should not be defined in solrconfig.xml. This allows the request dispatcher to dispatch to the request handler specified in the qt parameter. Blacklight, by default, expects a search and a document request handler (note the absence of the leading /).

We could override the controller code for Blacklight to call our request handlers.  But a simpler solution is to update the solrconfig.xml to follow the Blacklight convention.
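For reference, following the Blacklight convention amounts to something like this in solrconfig.xml (a minimal sketch with abbreviated defaults, not the full file that ships with Blacklight):

```xml
<requestDispatcher handleSelect="true" />

<!-- note: no leading slash in the handler name -->
<requestHandler name="search" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str name="rows">10</str>
  </lst>
</requestHandler>
```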

The ‘document’ Request Handler and id Passing

Blacklight expects there to be a document request handler defined in the solrconfig.xml file like this:

<!-- for requests to get a single document; use id=666 instead of q=id:666 -->
<requestHandler name="document" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">all</str>
    <str name="fl">*</str>
    <str name="rows">1</str>
    <str name="q">{!raw f=id v=$id}</str> <!-- use id=666 instead of q=id:666 -->
  </lst>
</requestHandler>

As the comment says, Blacklight will pass the request to Solr in the format id=666 instead of q=id:666.  It achieves this by using the Solr raw query parser.  However, this only works if your unique id is a String.  In our case, the unique id is a long, and passing in id=666 does not return anything in the Solr response.

There are two ways to solve this issue.  The first is to rebuild the index and change the id type from long to String.  The other is to override solr_helper.rb to pass in q=id:xxx instead of id=xxx.  The code snippet is below.

require "#{Blacklight.root}/lib/blacklight/solr_helper.rb"

module Blacklight::SolrHelper
  extend ActiveSupport::Concern

  # returns a params hash for finding a single solr document (CatalogController #show action)
  # If the id arg is nil, then the value is fetched from params[:id]
  # This method is primarily called by the get_solr_response_for_doc_id method.
  def solr_doc_params(id=nil)
    id ||= params[:id]
    p = blacklight_config.default_document_solr_params.merge({
      #:id => id # this assumes the document request handler will map the 'id' param to the unique key field
      :q => "id:" + id.to_s
    })
    p[:qt] ||= 'document'
    p
  end
end

Getting Facet Pivot to work

In our index, we have a top-level facet called industry and a child facet called source that should be displayed in a hierarchical tree, with source values nested under each industry.

The correct configuration is in the code snippet below.

# Industry
config.add_facet_field 'industry', :label => 'Industry', :show => false
# Source
config.add_facet_field 'source_facet', :label => 'Source', :show => false
# Industry -> Source
config.add_facet_field 'industry_source_pivot_field', :label => 'Industry/Source', :pivot => ['industry', 'source_facet']

You must add the two base fields (industry and source_facet) to the catalog_controller.rb file and set :show => false if they should not be displayed, which is usually the case since the data is already displayed in the pivot tree.  The current documentation on Blacklight facet pivot support makes it seem like only the last line is needed.  If only the last line is defined, the facet pivot will render correctly in the refine panel, making you think that facet pivot is working OK. But when you click on the facet, you will get an error: “undefined method ‘label’ for nil:NilClass”.

Categories: CKM

Improving Code Quality: Together. Like We Used To. Like a Family.

CKM Blog - Tue, 2014-03-04 14:44

We had a day-long session of hacking trying to improve the code quality and test coverage of Ilios.

This post is clearly not a step-by-step instruction manual on transforming an intimidatingly large pile of spaghetti code into a software engineering masterpiece. I hope video is posted soon of Dan Gribbin’s jQuery Conference presentation from last month to see what wisdom and techniques I can steal acquire.

In the meantime, here are a few small things we learned doing the first session.

  1. Give everyone concrete instructions ahead of time regarding how to get the app set up and how to get all the existing tests running. Have new folks arrive early for a get-setup session. This allows new folks to hit the ground running and lets experienced folks begin coding or help people with interesting problems, rather than help people just get started.
  2. Decide on a focus ahead of time. Try to fix one class of bugs, write one type of test, complete coverage for one large component, or whatever. This allows for more collaboration as people are working on similar things.
  3. Do it often or you lose momentum! I suspect that weekly is too often. We’re trying once every two-to-three weeks.

P. S. If you recognized the reference in this post’s title, then clearly you have all the skills required to work here. We’re hiring a Front-End Engineer and the job is so good that HR pasted the description twice into the ad. Submit your résumé and knock our socks off.

Categories: CKM

Running Behat Tests via Sauce Labs on Travis-CI

CKM Blog - Mon, 2014-02-24 10:04

We use Behat for testing the Ilios code base. (We also use PHPUnit and Jasmine.) We started out using Cucumber but switched to Behat on the grounds that we should use PHP tools with our PHP project. Our thinking was that someone needing to dig in deep to write step code shouldn’t have to learn Ruby when the rest of the code was PHP.

We use Travis for continuous integration. Naturally, we needed to get our Behat tests running on Travis. Fortunately, there is already a great tutorial from about a year ago explaining how to do this.

Now let’s say you want to take things a step further. Let’s say you want your Behat tests to run on a variety of browsers and operating systems, not just whatever you can pull together on the Linux host running your Travis tests. One possibility is Sauce Labs, which is free for open source projects like Ilios.

Secure Environment Variables

Use the travis Ruby gem to generate secure environment variable values for your .travis.yml file containing your SAUCE_USERNAME and your SAUCE_ACCESS_KEY. See the helpful Travis documentation for more information.

Sauce Connect

You may be tempted to use the Travis addon for Sauce Connect. I don’t because, using the addon, Travis builds hang (and thus fail) when running the CI tests in a fork. This is because forks cannot read the secure environment variables generated in the previous step.

Instead, I check to see if SAUCE_USERNAME is available and, if so, then I run Sauce Connect using the same online bash script (located in a GitHub gist) used by the addon provided by Travis. (By the way, you can check for TRAVIS_SECURE_ENV_VARS if that feels better than checking for SAUCE_USERNAME.)

The specific line in .travis.yml that does this is:

- if [ "$SAUCE_USERNAME" ] ; then (curl -L https://gist.github.com/santiycr/5139565/raw/sauce_connect_setup.sh | bash); fi

Use the Source, Luke

Now it’s time to get Behat/Mink to play nicely with Sauce Labs.

The good news is that there is a saucelabs configuration option. The bad news is that, as far as I can tell, it is not documented at the current time. So you may need to read the source code if you want to find out about configuration options or troubleshoot. Perhaps it’s intended to be released and documented in the next major release. Regardless, we’re using it and it’s working for us. Enable it in your behat.yml file:

default:
  extensions:
    Behat\MinkExtension\Extension:
      saucelabs: ~

Special Sauce

We keep our Behat profile for Sauce in its own file, because it’s special. Here’s our sauce.yml file:

# Use this profile to run tests on Sauce against the Travis-CI instance
default:
  context:
    class: "FeatureContext"
  extensions:
    Behat\MinkExtension\Extension:
      base_url: https://localhost
      default_session: saucelabs
      javascript_session: saucelabs
      saucelabs:
        browser: "firefox"
        capabilities:
          platform: "Windows 7"
          version: 26

Note that we configured our app within Travis-CI to run over HTTPS. In a typical setting, you will want the protocol of your base_url to specify HTTP instead.

Here’s the line in our .travis.yml to run our Behat tests using the Sauce profile:

- if [ "$SAUCE_USERNAME" ] ; then (cd tests/behat && bin/behat -c sauce.yml); fi

Of course, if you’re using a different directory structure, you will need to adjust the command to reflect it.

That’s All, Folks!

I hope this has been helpful. It will no doubt be out of date within a few months, as things move quickly with Behat/Mink, Sauce Labs, and Travis-CI. I will try to keep it up to date and put a change log here at the bottom. Or if a better resource for this information pops up, I’ll just put a link at the top. Thank you for reading!

Categories: CKM

Redesigning the Legacy Tobacco Documents Library Site Part 1 — User Research

CKM Blog - Wed, 2014-02-19 11:21

The Legacy Tobacco Documents Library site (LTDL) is undergoing a user-centered redesign.  A user-centered design process (a key feature of user experience, or UX) is pretty much what it sounds like: every decision about how the site will work starts from the point of view of the target user’s needs.

As a UX designer, my job begins with user research to identify the target users, and engaging with these users to identify their actual needs (versus what we might assume they want).

Prior to my arrival, the LTDL team had already identified three target users: the novice (a newbie with little or no experience searching our site), the motivated user (someone who has not been trained in how to search our site, but is determined to dig in and get what they need. Unlike the novice, the motivated user won’t abandon their search efforts), and the super user (someone who has gone through LTDL search training and knows how to construct complex search queries).

Given this head start, I spent a few weeks conducting extensive user research with a handful of volunteers representing all three user types.  I used a combination of hands-off observation, casual interviews, and user testing of the existing site to discover:

    • what the user expects from the LTDL search experience
    • what they actually need to feel successful in their search efforts
    • what they like about the current site
    • what they’d like to change about the current site

Lessons learned will guide my design decisions for the rest of the process.  Below you’ll find excerpts from the User Research Overview presentation I delivered to my team:


In addition to engaging directly with users, I did a deep dive into the site analytics.  The data revealed the surprising statistic that most of the LTDL site traffic (75%) originated from external search engines like Google.  The data further revealed that once these users got to our site, they were plugging in broad search terms (like tobacco or cancer) that were guaranteed to return an overwhelming number of results.  This meant that most of our users were novices and motivated users, not the super users we were used to thinking about and catering to.

This information exposed the key problem to be solved with the LTDL redesign: how to build an easy-to-use search engine that teaches the user how to return quality results, without dumbing down the experience for our super users.

Categories: CKM

5 Questions with Dr. Daniel Lowenstein

The Better Presenter - Mon, 2013-07-29 08:30

In the previous post, we were introduced to Dr. Daniel Lowenstein and his “Last Lecture” presentation, which was both powerful and inspiring. Shortly after writing the post, Dr. Lowenstein contacted me, and we had an interesting discussion about his experience preparing for, and delivering that presentation.

I have always wanted to incorporate the voices of the instructors, students, and staff at UCSF, who work in the trenches and present or attend presentations on a daily basis. This post marks the beginning of a new series that will feature interviews of those people. I hope you enjoy the first episode of “5 Questions!”

5 Questions with Dr. Lowenstein

Bonus track: The Basement People

The full version of the original presentation has recently been uploaded to the UCSF Public Relations YouTube channel, so please head over there to watch the video, like it, and leave your comments!

If you have any ideas about who the next 5 Questions interviewee should be, please contact me or leave your ideas in the comments section below.

Categories: Better Presenter

Top 5 Lessons Learned from The Last Lecture

The Better Presenter - Thu, 2013-05-16 12:58

Powerful. Inspirational. Emotionally moving.

Those are the words that best describe Dr. Daniel Lowenstein’s “The Last Lecture” presentation, delivered to a packed house in Cole Hall on April 25th. The Last Lecture is an annual lecture series hosted by a UCSF professional school government group (and inspired by the original last lecture), in which the presenter is hand-picked by students and asked to respond to the question, “If you had but one lecture to give, what would you say?” Dr. Daniel Lowenstein, epilepsy specialist and director of the UCSF Epilepsy Center, did not disappoint. In fact, I can say with confidence that he delivered one of the best presentations I have ever attended.

Rather than attempt to paraphrase his words, or provide a Cliff Notes version that wouldn’t do his presentation justice, I will instead encourage you to watch the video recording of his presentation. The video is an hour in length, and if you have any interest in becoming a better presenter yourself, it is a must-watch. After the jump, we’ll explore my “top 5 lessons learned” from Dr. Lowenstein’s presentation.

Last Lecture – Top 5 Lessons Learned:

  1. “PowerPoint” is still boring. Dr. Lowenstein’s projected slide show was not typical PowerPoint. It did not consist of any bullet points, familiar and boring templates, or images “borrowed” from a last-minute Google image search. Instead, he used images from his own collection, and Prezi to build a canvas of images that moved in all directions, expanding, contracting, and rotating to craft his message. The resulting slide show was personal, meaningful, and most importantly, relatable.
  2. Storytelling is the secret to success. When I first began studying the art of presenting, the idea of incorporating storytelling into a presentation was an elusive one. I am now convinced that storytelling is the secret to transforming a good presentation into a great presentation. It is the glue that holds all of the elements of your presentation together, as well as the glitter that makes it shine. Dr. Lowenstein’s entire presentation was crafted into a story, the setting of which was established right from the beginning and illustrated by his first content slide. There were also chapters within the story, the most memorable of which for me was the Justice segment of his presentation, and his depiction of The Basement People. He didn’t begin by pointing out the original members of the UCSF Black Caucus who were in the audience, as most presenters would have done. Instead, he gradually painted a picture for us, so we could imagine what it was like to be a minority at UCSF over 50 years ago. He described their struggles in detail, gave us time to relate, and even pointed out the fact that they had met in that very hall where we all sat. He didn’t reveal their presence until the end of the chapter, creating a crescendo of emotion, and the moment brought tears to the eyes of many audience members.
  3. Vulnerability equals trust. If you want your audience to believe in your message, you must first give them a reason to believe in you. And one of the most effective ways to make that happen is to share your vulnerabilities. In the eyes of the audience, this makes the presenter human, and it creates a bond between both parties. No one wants to listen to a sales-pitch presentation. Instead, they want the whole story with the ups and downs, so they can decide how they feel about it on their own terms. Just be sure to share vulnerabilities that relate to the subject of the presentation, because you’re going for empathy, not sympathy (which could have a negative effect). Dr. Lowenstein, when talking about Joy and Sorrow, shared one of his deepest personal sorrows, which was the unexpected passing of his son. In contrast, he shared a touching moment with his wife, expressing his love for her, right in front of the whole audience. These moments worked perfectly in the presentation because they were genuine, and they gave the audience a deeper understanding of Dr. Lowenstein.
  4. Don’t forget humor. No matter how serious, no matter how technical, there is a place in your presentation for a little humor. It can be used to lighten a heavy moment, open closed minds, and bring everyone in a room together (even if your audience members have very different backgrounds). Amidst Dr. Lowenstein’s presentation were timely moments of humor that seemed to come naturally from his personality. And hey, who doesn’t like a good male-pattern-baldness joke, anyway?! But seriously, if you can laugh at yourself, the audience has no excuse not to laugh along with you. There are two keys to using humor in your presentation: (1) it should be relevant to the current topic or story, and (2) it can’t be forced. If you’re not good at telling jokes, then try another form of humor!
  5. Present on your passions. As a presenter, your goal is simple – to instill in the audience an understanding of your message, and a belief in you. If you give them the impression, even for a moment, that you don’t believe in yourself or the message you’re presenting, you’re a dead man walking (or presenting) in the audience’s eyes. If you choose topics that you are passionate about, however, you will never have this problem. You may think it was easy for Dr. Lowenstein to be passionate about his presentation, because his task was, in essence, to present about his life’s passions… but I can assure you, it’s not easy to talk about your own life in front of an audience. In contrast, imagine that you have to give a presentation on, say, your department’s new accounting policies. To make matters worse, imagine that your audience is being forced to attend. What do you do? Surely, there is no passion to be found in accounting policy, is there?! Well, actually, there is, if you take the right angle. For example, does this new accounting policy save the department time, or money? And then, can that saved time and money be applied toward more constructive or creative tasks that your coworkers actually want to do? If so, and you frame the presentation in a positive light, the audience will listen.

To top it all off, Dr. Lowenstein spent the last few minutes of his presentation reviewing each of the 4 segments of his talk, and then related it all back to a single, clear message. That, my friends, is an example of storytelling 101, so I hope you were taking notes!

Continue on to part 2 of this post, where I interview Dr. Lowenstein about his experiences preparing for and delivering the Last Lecture presentation!

If you also found inspiration in Dr. Lowenstein’s presentation, please share your thoughts below, and I’ll see you at next year’s “Last Lecturer” event.

Categories: Better Presenter
Syndicate content