
It’s All About Audience

CKM Blog - Fri, 2014-04-11 16:01

Back in February, the new Web Projects Team made known our purpose and guiding principles. All of that still holds true, but we realized that “Support education and meet the research needs of our users regardless of location or device” might need some clarification. UCSF is an unusual academic institution: among other things, it has more staff than students and no undergraduates. So who is the primary audience that the library supports?

Primary Audiences served by the Library

  1. Teaching faculty
    • usually also involved in clinical research or practice or basic science research
  2. Students in degree programs
    • professional students in medicine, pharmacy, nursing, and dentistry
    • graduate students in basic science
    • graduate students in social sciences, nursing, and history
  3. Researchers in basic science or clinical medicine
    • faculty
    • postdocs
    • PhD students
    • lab managers/research staff

Notice that there is a fair amount of overlap between audiences with some people wearing multiple hats.

Of course there are others who use the Library too, for example, alumni, the public, visitors, Library staff, outside librarians, etc. They can all still benefit from parts of our site, but their needs will not drive decisions about how to structure our web pages and services. Ultimately, everything about the UCSF Library web should make it easier and more intuitive for the three audiences listed above to meet their research and education needs. All else is secondary, though not necessarily unimportant.

UCSF by the numbers

To define these audiences, we began by simply consulting the counts already provided by UCSF. However, those completely ignore Lab Managers and Research Assistants who have many of the same library needs as postdocs. There are also other staff members who do a lot of legwork for faculty, and therefore, reflect the library needs of faculty even though they are not counted as such. And if you talk about “students,” you must realize that the library needs of a medical student are completely different from those of a social sciences PhD. This means that the numbers are a rough estimate for our purposes.

These less obvious realities were gleaned from talking to people. The Library already tends to focus a lot on the Service Desk and subject liaisons when thinking about user interactions. To balance that, we decided to interview a variety of other library employees who act as liaisons to various user segments with library needs. A big thank you goes out to these individuals who took the time to share their super-valuable insights about user work patterns, language, and challenges!

  • Megan Laurance on basic science researchers
  • Art Townsend on Mission Bay users
  • Ben Stever and Kirk Hudson on Tech Commons users
  • Polina Ilieva and Maggie Hughes on researchers of special collections and archives
  • Dylan Romero on those who use multimedia stations and equipment and the CLE

A few other sources of insight came from meetings of the Student Advisory Board to the Library, LibQual feedback, and the Resource Access Improvement group.

We also came to the conclusion that it is helpful to think about users in terms of what they DO rather than by title alone. It’s the nature of their work that really defines their needs regarding library support. Once again the numbers are a rough estimate, but the segmentation they reveal is still helpful.

Next Steps

The Web Projects Team will continue to make iterative improvements to the Library web presence, some small and some larger, driven by our now established Purpose and Guiding Principles and through the lens of our primary audiences.

We will also be regularly checking feedback from end users via usage statistics and quick user tests, and that, in turn, will drive further improvements. In addition, we’ll continue to share about the evolution of the Library web and improvements to the user experience. If you have questions or comments on any of this, we’re all ears!

photo credit: Reuver via photopin cc

Categories: CKM

Solr/Blacklight highlighting and upgrading Blacklight from 5.1.0 to 5.3.0

CKM Blog - Mon, 2014-03-31 13:22

Last week, I ran into a highlighting issue with Blacklight: clicking on a facet blanked out the values of the fields with highlighting turned on. I debugged into the Blacklight 5.3.0 gem and found that document_presenter.rb displays the highlight snippet from the Solr highlighting response. If nothing is returned from Solr highlighting, it returns nil to the view.

when (field_config and field_config.highlight)
  # retrieve the document value from the highlighting response
  @document.highlight_field(field_config.field).map { |x| x.html_safe } if @document.has_highlight_field? field_config.field

This seemed strange to me because I couldn’t always guarantee that Solr returned something for the highlighting field. So I posted my question to the Blacklight users group. I got a response right away (thank you!), and it turns out Blacklight inherits Solr’s highlighting behavior. In order to always return a value for the highlighting field, an hl.alternateField is needed in the Solr configuration.
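To make the failure mode concrete, here is a toy illustration in plain Ruby (not Blacklight code; the display_value method and the hash shapes are invented for this example) of what happens with and without a fallback:

```ruby
# Toy model of Solr's highlighting response: a hash of document id =>
# { field => [snippets] }. A display helper that relies on highlighting
# alone returns nil when a field has no snippet; hl.alternateField tells
# Solr to fall back to a stored value instead.
def display_value(highlighting, doc, field)
  snippets = highlighting.fetch(doc[:id], {})[field]
  snippets ? snippets.join(' ') : doc[field] # fallback mimics hl.alternateField
end

doc = { id: '1', dt: 'letter' }

# The query matched another field, so Solr returned no snippet for :dt.
no_snippet = { '1' => {} }
puts display_value(no_snippet, doc, :dt)   # => "letter" (fallback value)

# With a snippet present, the highlighted text is used.
with_snippet = { '1' => { dt: ['<em>letter</em>'] } }
puts display_value(with_snippet, doc, :dt) # => "<em>letter</em>"
```

Without the fallback, the first call would return nil, which is exactly the blanked-out field described above.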

Here’s my code in the catalog_controller.rb that enables highlighting:

configure_blacklight do |config|
  ## Default parameters to send to solr for all search-like requests. See also SolrHelper#solr_search_params
  config.default_solr_params = {
    :qt => 'search',
    :rows => 10,
    :fl => 'dt pg bn source dd ti id score',
    :"hl.fl" => 'dt pg bn source',
    :"f.dt.hl.alternateField" => 'dt',
    :"" => 'pg',
    :"" => 'bn',
    :"f.source.hl.alternateField" => 'source',
    :"hl.simple.pre" => '<em>',
    :"" => '</em>',
    :hl => true
  }
  ...
  config.add_index_field 'dt', :label => 'Document type', :highlight => true
  config.add_index_field 'bn', :label => 'Bates number', :highlight => true
  config.add_index_field 'source', :label => 'Source', :highlight => true
  config.add_index_field 'pg', :label => 'Pages', :highlight => true


Another issue I ran into was upgrading from Blacklight 5.1.0 to 5.3.0, which has an impact on the solrconfig.xml file. It took me a bit of time to figure out the change that’s needed.

In the solrconfig.xml that ships with Blacklight 5.1.0, the standard requestHandler is set as the default.

<requestHandler name="standard" default="true" />

This means if the qt parameter is not passed in, Solr will use this request handler.  In fact, with version 5.1.0, which request handler is set as default is not important at all. In my solrconfig.xml, my own complex request handler is set as default and it did not cause any issues.

But in 5.3.0 the search request handler must be set as the default:

<requestHandler name="search" default="true">

This is because Blacklight now issues a Solr request like this: [Solr_server]:8983/solr/[core_name]/select?wt=ruby. Notice the absence of the qt parameter. The request is routed to the default search request handler to retrieve and facet records.

Categories: CKM

On Metrics

CKM Blog - Mon, 2014-03-24 16:36

Collecting metrics is important. But we all know that many metrics are chosen for collection because they are inexpensive and obvious, not because they are actually useful.

(Quick pre-emptive strike #1: I’m using metrics very broadly here. Yes, sometimes I really mean measurements, etc. For better or for worse, this is the way metrics is used in the real world. Oh well.)

(Quick pre-emptive strike #2: Sure, if you’re Google or Amazon, you probably collect crazy amounts of data that allow highly informative and statistically valid metrics through sophisticated tools. I’m not talking about you.)

I try to avoid going the route of just supplying whatever numbers I can dig up and hope that it meets the person’s need. Instead, I ask the requester to tell me what it is they are trying to figure out and how they think they will interpret the data they receive. If pageviews have gone up 10% from last year, what does that tell us? How will we act differently if pageviews have only gone up 3%?

This has helped me avoid iterative metric fishing expeditions. People often ask for statistics hoping that, when the data comes back, it will tell an obvious story that they like. Usually it doesn’t tell any obvious story or tells a story they don’t like, so they start fishing. “Now can you also give me the same numbers for our competitors?” “Now can you divide these visitors into various demographics?”

When I first started doing this, I was afraid that people would get frustrated with my push-back on their requests. For the most part, that didn’t happen.

Instead, people started asking better questions as they thought through and explained how the data would be interpreted. And I felt better about spending resources getting people the information they need because I understood its value.

Just like IT leaders need to “consistently articulate the business value of IT”, it is healthy for data requesters to articulate the value of their data requests.

Categories: CKM

Headless JavaScript Testing, Continuous Integration, and Jasmine 2.0

CKM Blog - Mon, 2014-03-17 15:29

Earlier this month, my attention was caught by a short article entitled “Headless Javascript testing with Jasmine 2.0” by Lorenzo Planas. Integrating our Jasmine tests on Ilios with our Travis continuous integration had been on my list of things to procrastinate on. The time had come to address it.

After integrating Lorenzo’s very helpful sample into our code base, we ran into a couple of issues. First, the script was exiting before the specs were finished running. The sample code had a small number of specs, so it never hit that problem; Ilios has hundreds of specs, and the run seemed to exit after around 13 of them.

I patched the code to have it wait until it saw an indication in the DOM that the specs had finished running. Now we ran into the second issue: The return code from the script indicated success even when one of the specs failed. For Travis, it needed to supply a return code indicating failure. That was an easy enough patch, although I received props for cleverness from a teammate.

I sent a pull request to the original project so others could benefit from the changes. Lorenzo not only merged the pull request but put a nice, prominent note on the article letting people know, even linking to my GitHub page (which I then hurriedly updated).

So, if you’re using Jasmine and Travis but don’t have the two yet integrated, check out Lorenzo’s repo on GitHub and stop procrastinating!

Categories: CKM

Working with Blacklight Part 3 – Linking to Your Solr Index

CKM Blog - Tue, 2014-03-11 09:07

We are using Blacklight to provide a search interface for a Solr index. I expected it to be super straightforward to plug our Solr index into the Blacklight configuration. That wasn’t quite the case! Most of the basic features do plug in nicely, but if you use more advanced Solr features (like facet pivot) or if your solrconfig.xml differs from the Blacklight example solrconfig.xml file, then you are out of luck. There is not currently much documentation to help you out.

SolrConfig.xml – requestDispatcher

After 3.6, Solr ships with <requestDispatcher handleSelect="false"> in the solrconfig.xml file. But Blacklight works with <requestDispatcher handleSelect="true"> and passes in the qt (request handler) parameter explicitly.

The /select request handler should not be defined in solrconfig.xml. This allows the request dispatcher to dispatch to the request handler specified in the qt parameter. Blacklight, by default, expects a search and a document request handler (note the absence of the leading /).

We could override the controller code for Blacklight to call our request handlers.  But a simpler solution is to update the solrconfig.xml to follow the Blacklight convention.
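For reference, a minimal sketch of what the Blacklight-convention solrconfig.xml looks like (handler bodies elided; only the handler names and the handleSelect setting matter here):

```xml
<requestDispatcher handleSelect="true"/>

<!-- note: no leading slash on the handler names -->
<requestHandler name="search" class="solr.SearchHandler">
  <!-- default search params go here -->
</requestHandler>

<requestHandler name="document" class="solr.SearchHandler">
  <!-- single-document lookup params go here -->
</requestHandler>
```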

The ‘document’ Request Handler and id Passing

Blacklight expects there to be a document request handler defined in the solrconfig.xml file like this:

<!-- for requests to get a single document; use id=666 instead of q=id:666 -->
<requestHandler name="document" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">all</str>
    <str name="fl">*</str>
    <str name="rows">1</str>
    <str name="q">{!raw f=id v=$id}</str> <!-- use id=666 instead of q=id:666 -->
  </lst>
</requestHandler>

As the comment says, Blacklight will pass the request to Solr in the format id=666 instead of q=id:666. It achieves this by using the Solr raw query parser. However, this only works if your unique id is a String. In our case, the unique id is a long, and passing in id=666 does not return anything in the Solr response.

There are two ways to solve this issue. The first is to rebuild the index and change the id type from long to String. The other is to override solr_helper.rb to pass in q=id:xxx instead of id=xxx. The code snippet is below.

require "#{Blacklight.root}/lib/blacklight/solr_helper.rb"

module Blacklight::SolrHelper
  extend ActiveSupport::Concern

  # returns a params hash for finding a single solr document (CatalogController #show action)
  # If the id arg is nil, then the value is fetched from params[:id]
  # This method is primarily called by the get_solr_response_for_doc_id method.
  def solr_doc_params(id=nil)
    id ||= params[:id]
    p = blacklight_config.default_document_solr_params.merge({
      #:id => id # this assumes the document request handler will map the 'id' param to the unique key field
      :q => "id:" + id.to_s
    })
    p[:qt] ||= 'document'
    p
  end
end

Getting Facet Pivot to Work

In our index, we have a top-level facet called industry and a child facet called source that should be displayed as a hierarchical tree.

The correct configuration is in the code snippet below.

# Industry
config.add_facet_field 'industry', :label => 'Industry', :show => false
# Source
config.add_facet_field 'source_facet', :label => 'Source', :show => false
# Industry -> Source
config.add_facet_field 'industry_source_pivot_field', :label => 'Industry/Source', :pivot => ['industry', 'source_facet']

You must add the two base fields (industry and source_facet) to the catalog_controller.rb file and set :show => false if they should not be displayed. That is usually the case, since the data is already displayed in the pivot tree. The current documentation on Blacklight facet pivot support makes it seem like only the last line is needed. If only the last line is defined, the facet pivot will render correctly in the refine panel, making you think that facet pivot is working OK. But when you click on the facet, you will get an error: “undefined method ‘label’ for nil:NilClass”.

Categories: CKM

Improving Code Quality: Together. Like We Used To. Like a Family.

CKM Blog - Tue, 2014-03-04 14:44

We had a day-long hacking session aimed at improving the code quality and test coverage of Ilios.

This post is clearly not a step-by-step instruction manual on transforming an intimidatingly large pile of spaghetti code into a software engineering masterpiece. I hope video of Dan Gribbin’s jQuery Conference presentation from last month is posted soon so I can see what wisdom and techniques I can steal, er, acquire.

In the meantime, here are a few small things we learned doing the first session.

  1. Give everyone concrete instructions ahead of time regarding how to get the app set up and how to get all the existing tests running. Have new folks arrive early for a get-setup session. This allows new folks to hit the ground running and lets experienced folks begin coding or help people with interesting problems, rather than help people just get started.
  2. Decide on a focus ahead of time. Try to fix one class of bugs, write one type of test, complete coverage for one large component, or whatever. This allows for more collaboration as people are working on similar things.
  3. Do it often or you lose momentum! I suspect that weekly is too often. We’re trying once every two-to-three weeks.

P. S. If you recognized the reference in this post’s title, then clearly you have all the skills required to work here. We’re hiring a Front-End Engineer and the job is so good that HR pasted the description twice into the ad. Submit your résumé and knock our socks off.

Categories: CKM

Running Behat Tests via Sauce Labs on Travis-CI

CKM Blog - Mon, 2014-02-24 10:04

We use Behat for testing the Ilios code base. (We also use PHPUnit and Jasmine.) We started out using Cucumber but switched to Behat on the grounds that we should use PHP tools with our PHP project. Our thinking was that someone needing to dig in deep to write step code shouldn’t have to learn Ruby when the rest of the code was PHP.

We use Travis for continuous integration. Naturally, we needed to get our Behat tests running on Travis. Fortunately, there is already a great tutorial from about a year ago explaining how to do this.

Now let’s say you want to take things a step further. Let’s say you want your Behat tests to run on a variety of browsers and operating systems, not just whatever you can pull together on the Linux host running your Travis tests. One possibility is Sauce Labs, which is free for open source projects like Ilios.

Secure Environment Variables

Use the travis Ruby gem to generate secure environment variable values for your .travis.yml file containing your SAUCE_USERNAME and your SAUCE_ACCESS_KEY. See the helpful Travis documentation for more information.

Sauce Connect

You may be tempted to use the Travis addon for Sauce Connect. I don’t because, using the addon, Travis builds hang (and thus fail) when running the CI tests in a fork. This is because forks cannot read the secure environment variables generated in the previous step.

Instead, I check to see if SAUCE_USERNAME is available and, if so, then I run Sauce Connect using the same online bash script (located in a GitHub gist) used by the addon provided by Travis. (By the way, you can check for TRAVIS_SECURE_ENV_VARS if that feels better than checking for SAUCE_USERNAME.)

The specific line in .travis.yml that does this is:

- if [ "$SAUCE_USERNAME" ] ; then (curl -L | bash); fi

Use the Source, Luke

Now it’s time to get Behat/Mink to play nicely with Sauce Labs.

The good news is that there is a saucelabs configuration option. The bad news is that, as far as I can tell, it is not documented at the current time. So you may need to read the source code if you want to find out about configuration options or troubleshoot. Perhaps it’s intended to be released and documented in the next major release. Regardless, we’re using it and it’s working for us. Enable it in your behat.yml file:

default:
  extensions:
    Behat\MinkExtension\Extension:
      saucelabs: ~

Special Sauce

We keep our Behat profile for Sauce in its own file, because it’s special. Here’s our sauce.yml file:

# Use this profile to run tests on Sauce against the Travis-CI instance
default:
  context:
    class: "FeatureContext"
  extensions:
    Behat\MinkExtension\Extension:
      base_url: https://localhost
      default_session: saucelabs
      javascript_session: saucelabs
      saucelabs:
        browser: "firefox"
        capabilities:
          platform: "Windows 7"
          version: 26

Note that we configured our app within Travis-CI to run over HTTPS. In a typical setting, you will want the protocol of your base_url to specify HTTP instead.

Here’s the line in our .travis.yml to run our Behat tests using the Sauce profile:

- if [ "$SAUCE_USERNAME" ] ; then (cd tests/behat && bin/behat -c sauce.yml); fi

Of course, if you’re using a different directory structure, you will need to adjust the command to reflect it.

That’s All, Folks!

I hope this has been helpful. It will no doubt be out of date within a few months, as things move quickly with Behat/Mink, Sauce Labs, and Travis-CI. I will try to keep it up to date and put a change log here at the bottom. Or if a better resource for this information pops up, I’ll just put a link at the top. Thank you for reading!

Categories: CKM

Redesigning the Legacy Tobacco Documents Library Site Part 1 — User Research

CKM Blog - Wed, 2014-02-19 11:21

The Legacy Tobacco Documents Library site (LTDL) is undergoing a user-centered redesign.  A user-centered design process (a key feature of user experience, or UX) is pretty much what it sounds like: every decision about how the site will work starts from the point of view of the target user’s needs.

As a UX designer, my job begins with user research to identify the target users, and engaging with these users to identify their actual needs (versus what we might assume they want).

Prior to my arrival, the LTDL team had already identified three target users:

  • the novice: a newbie with little or no experience searching our site
  • the motivated user: someone who has not been trained in how to search our site, but is determined to dig in and get what they need; unlike the novice, the motivated user won’t abandon their search efforts
  • the super user: someone who has gone through LTDL search training and knows how to construct complex search queries

Given this head start, I spent a few weeks conducting extensive user research with a handful of volunteers representing all three user types.  I used a combination of hands-off observation, casual interviews, and user testing of the existing site to discover:

    • what the user expects from the LTDL search experience
    • what they actually need to feel successful in their search efforts
    • what they like about the current site
    • what they’d like to change about the current site

Lessons learned will guide my design decisions for the rest of the process. Below you’ll find excerpts from the User Research Overview presentation I delivered to my team.

In addition to engaging directly with users, I did a deep dive into the site analytics.  The data revealed the surprising statistic that most of the LTDL site traffic (75%) originated from external search engines like Google.  The data further revealed that once these users got to our site, they were plugging in broad search terms (like tobacco or cancer) that were guaranteed to return an overwhelming number of results.  This meant that most of our users were novices and motivated users, not the super users we were used to thinking about and catering to.

This information exposed the key problem to be solved with the LTDL redesign: how to build an easy-to-use search engine that teaches the user how to return quality results, without dumbing down the experience for our super users.

Categories: CKM

Working with Blacklight Part 2 – displaying the result count

CKM Blog - Tue, 2014-02-11 11:26

This is the second in a series of posts about customizing Blacklight.

Last time, we implemented a feature that emailed a list of saved searches. We’d also like to display the number of results retrieved by each search. This task is a good way to learn about how a Solr response is stored and processed in Blacklight.

You can either start from a clean installation of Blacklight or build on the results of the previous exercise. A completed version is available on GitHub at

Step 1: Add a “numfound” attribute to the Search model

Search history and saved searches are stored in an array of Search objects. The Search model in Blacklight holds the query_params for a search but doesn’t store the number of results. We’ll add an attribute, “numfound”, to store this value.

There are a few ways to do this in Rails – here, we’ll go with a migration.

rails g migration add_numfound_to_search numfound:integer

This should produce a new migration:

class AddNumfoundToSearch < ActiveRecord::Migration
  def change
    add_column :searches, :numfound, :integer
  end
end

Then run the migration:

rake db:migrate

You may want to inspect the new schema or object to make sure that the model has been modified properly.

Step 2: Retrieve the number of results and store it in the Search object

Searches are created and stored in the search_context.rb class in the Blacklight gem (under lib/assets/blacklight/catalog/search_context.rb).

saved_search ||= begin
  s = Search.create(:query_params => params_copy)
  add_to_search_history(s)
  s
end

This code is not called explicitly in a controller – instead, it is run as a before_filter prior to the execution of any controller that includes it. This is mentioned in the comments at the top of the search_context.rb file.

This works for storing the query parameters, which are known before the controller is called. However, we won’t know the number of results in the Solr response until after the controller action runs, so we’ll need to move the code for creating and saving a Search into a controller method.

We can get access to the object holding the solr response in the index method of the catalog controller (under lib/blacklight/catalog.rb in the Blacklight gem).

(@response, @document_list) = get_search_results
@filters = params[:f] || []

The get_search_results method in solr_helper.rb runs a Solr query and returns a SolrResponse object (lib/solr_response.rb). Since this exercise is really about getting familiar with the Solr-related code in Blacklight, it’s worth opening these classes and taking a look at how a query is executed and how results are stored.

The solr_response object (stored in @response, above) provides a hash with results data. The number of results is stored under “numFound”. We can now modify the index method to retrieve the number of results associated with a Solr query, add them to the Search object, and save the results.
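For reference, here is the shape of that hash (a hand-written example, not captured Solr output):

```ruby
# Shape of the data behind @response.response["numFound"]:
solr_response = {
  "responseHeader" => { "status" => 0, "QTime" => 2 },
  "response" => {
    "numFound" => 42,  # total matches, across all pages of results
    "start"    => 0,   # offset of the first returned doc
    "docs"     => []   # the current page of documents (elided here)
  }
}

num_found = solr_response["response"]["numFound"]
puts num_found # => 42
```

Note that numFound is the total match count, not the number of documents in the current page.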

Here’s the full code (add this to catalog_controller.rb in your local app).

# get search results from the solr index
def index
  (@response, @document_list) = get_search_results
  @filters = params[:f] || []

  params_copy = params.reject { |k,v| blacklisted_search_session_params.include?(k.to_sym) or v.blank? }
  return if params_copy.reject { |k,v| [:action, :controller].include? k.to_sym }.blank?

  saved_search = { |x| x.query_params == params_copy }.first
  s = saved_search || => params_copy)
  s.numfound = @response.response["numFound"]
  add_to_search_history(s)

  respond_to do |format|
    format.html { }
    format.rss  { render :layout => false }
    format.atom { render :layout => false }
    format.json do
      render json: render_search_results_as_json
    end
  end
end

Step 3: Add the number of results to the view

Now that the number of results is available in the Search object, you can easily display them in the index page in the saved_searches or search_history views.

Here’s the snippet for index.html.erb under saved_searches

<table class="table table-striped">
  <%- @searches.each do |search| -%>
    <tr>
      <td><%= link_to_previous_search(search.query_params) %></td>
      <td>results: <%= search.numfound %></td>
      <td><%= button_to t('blacklight.saved_searches.delete'), forget_search_path( %></td>
    </tr>
  <%- end -%>
</table>

The only change here is the addition of “search.numfound” populated in the controller method above.

You can add the number of results to the search_history similarly.

Step 4: Try it out

You should now be able to run a search, list the search history (or saved searches, depending on which views you modified), and view the number of results associated with each search.

One note – this numfound value won’t automatically update if new material is added to the index, although clicking on the search link will display the larger, current number of results. So the stored count can get out of sync.

Categories: CKM

Working with Blacklight Part 1 – email search history

CKM Blog - Tue, 2014-01-28 11:40

This is the first of a series of posts about configuring and modifying Blacklight at UCSF. It’s less about emailing search history and more about getting familiar with Blacklight by picking something to modify and seeing how it goes…

We are developing a front end for a Solr repository of tobacco industry documents. Blacklight, out of the box, provides a lot of what we’d need. We decided to come up with a business requirement that isn’t currently in Blacklight and see what it’s like working with the code.

We decided to try emailing a list of saved searches. This blog post is a write-up of my notes. I’m hoping it will be useful as a tutorial/exercise for developers looking to get up to speed working with Blacklight code.

You should be able to start with a clean installation of Blacklight and add the functionality to email search histories from the notes here. A completed version is available on GitHub at

Step 1: Get a clean installation of the Blacklight app going

Use the quickstart guide at

(do all of it, including the Jetty Solr part).

Step 2: Configure an SMTP mailer (optional)

This is optional, but I prefer not to use a system mailer on my dev machine.

In config/environments/development.rb:

# Expands the lines which load the assets
config.assets.debug = true

config.action_mailer.delivery_method = :smtp
config.action_mailer.default_url_options = { host: '' }
config.action_mailer.perform_deliveries = true
config.action_mailer.smtp_settings = {
  :address => "",
  :port => 587,
  :domain => "localhost:3000",
  :user_name => "username",
  :password => "password",
  :authentication => "plain",
  :enable_starttls_auto => true
}

Test this to be sure it works by creating and emailing a Blacklight bookmark to yourself (the next steps won’t work if this doesn’t work).

Step 3: Add a feature to send an email history through the saved searches page

1) Create and save a few searches

Do a few searches (anything you like), then go to Saved Searches and save a few of them.
You’ll notice that unlike the Bookmarks page, there’s no functionality to email your saved searches yet.

2) Add a button to email saved searches.

First, we need to add an email button to the saved searches page. We’ll piggyback on the email button used for bookmarks.

If you look in your views directory, you won’t see any view code in your local app; it is currently stored in the Blacklight gem. Because our customizations are local, we (of course) won’t hack the gem directly; we’ll add or override things in our local app.

You can follow this tutorial without looking at the Blacklight gem source directly, but I’d recommend unpacking the gem so that you can look at the code. Do not change the gem code.

To do this, we’ll need to both create a new view and override an existing view from the Blacklight gem.

The view code for the main display page for saved searches is in app/views/saved_searches/index.html.erb

We’ll override this page locally to add the email button. To do this, create a new directory called saved_searches in the views directory and create a file called index.html.erb with this content (modified from the same file in the gem itself):

<div id="content" class="span9">
  <h1><%= t('blacklight.saved_searches.title') %></h1>

  <%- if current_or_guest_user.blank? -%>
    <h2><%= t('blacklight.saved_searches.need_login') %></h2>
  <%- elsif @searches.blank? -%>
    <h2><%= t('blacklight.saved_searches.no_searches') %></h2>
  <%- else -%>
    <p>
      <%= link_to t('blacklight.saved_searches.clear.action_title'), clear_saved_searches_path, :method => :delete, :data => { :confirm => t('blacklight.saved_searches.clear.action_confirm') } %>
    </p>

    <h2><%= t('blacklight.saved_searches.list_title') %></h2>

    <%= render 'search_tools' %>

    <table class="table table-striped">
      <%- @searches.each do |search| -%>
        <tr>
          <td><%= link_to_previous_search(search.query_params) %></td>
          <td><%= button_to t('blacklight.saved_searches.delete'), forget_search_path( %></td>
        </tr>
      <%- end -%>
    </table>
  <%- end -%>
</div>

This will add the search tools (through <%= render 'search_tools' %>) to the index page.

The _search_tools.html.erb partial doesn’t exist in the gem, so we’ll create it (also in the saved_searches view folder) by copying and modifying the gem’s _tools.html.erb partial, which is used to render the various tools for bookmarks.

<ul class="bookmarkTools">
  <li class="email">
    <%= link_to t('blacklight.tools.email'), email_search_path(:id => @searches), {:id => 'emailLink', :class => 'lightboxLink'} %>
  </li>
</ul>

3) Create routes for the email_search path

This email button links to a new path (email_search_path) that will need routes. Your first instinct as a Rails programmer might be to look into config/routes.rb.  But the Blacklight gem uses a separate class in /lib/blacklight/routes.rb to generate most of the routes.

Instead of manually creating a new route in the config folder, we’ll modify Blacklight’s routes class. There are a few ways to do this. You could override the entire class by creating a routes.rb file under the same directory path in your Rails app. For this exercise, we’ll limit our modifications to the single method we need to override and put the code in an initializer (in config/initializers). Although we’re only overriding one method, I would recommend taking a look at the full source in the gem to get a better sense of what this class does.
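Reopening a class to replace a single method is plain Ruby, not Blacklight magic. Here is a minimal sketch with made-up class and method names (not Blacklight’s actual Routes class) showing how the second definition wins while untouched methods keep their original behavior:

```ruby
# Stand-in for a class defined in a gem.
class GemRoutes
  def saved_searches
    "gem saved_searches routes"
  end

  def bookmarks
    "gem bookmarks routes"
  end
end

# Local override: reopening the class replaces only the method we redefine;
# every other method keeps its original (gem) behavior.
class GemRoutes
  def saved_searches
    "local saved_searches routes with email"
  end
end

routes = GemRoutes.new
puts routes.saved_searches  # overridden locally
puts routes.bookmarks       # still the gem version
```

Blacklight’s own pattern is slightly fancier (it defines the route methods in a RouteSets module and includes it), but the mechanics of “require the original, then redefine one method” are the same.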

# -*- encoding : utf-8 -*-
require "#{Blacklight.root}/lib/blacklight/routes.rb"
require 'deprecation'

module Blacklight
  class Routes
    extend Deprecation

    protected

    module RouteSets
      def saved_searches(_)
        add_routes do |options|
          delete "saved_searches/clear", :to => "saved_searches#clear", :as => "clear_saved_searches"
          get "saved_searches", :to => "saved_searches#index", :as => "saved_searches"
          put "saved_searches/save/:id", :to => "saved_searches#save", :as => "save_search"
          delete "saved_searches/forget/:id", :to => "saved_searches#forget", :as => "forget_search"
          post "saved_searches/forget/:id", :to => "saved_searches#forget"
          get "saved_searches/email", :to => "saved_searches#email", :as => "email_search"
          post "saved_searches/email"
        end
      end
    end
    include RouteSets
  end
end

4) Add a form to submit the email

Now that the routes are in place, we can create the form needed to submit an email.

In app/views/saved_searches create an email.html.erb view. This is based on the email.html.erb used to email bookmarks (under app/views/catalog in the blacklight gem).

<div class="modal-header">
  <button type="button" class="close" data-dismiss="modal" aria-hidden="true">×</button>
  <h1><%= t('blacklight.email.form.title') %></h1>
</div>
<%= render :partial => 'email_search_form' %>

In the same directory, create a partial named _email_search_form.html.erb to provide the form fields.


<%= form_tag url_for(:controller => "saved_searches", :action => "email"), :id => 'email_search_form', :class => "form-horizontal ajax_form", :method => :post do %>
  <div class="modal-body">
    <%= render :partial => '/flash_msg' %>
    <div class="control-group">
      <label class="control-label" for="to"><%= t('blacklight.email.form.to') %></label>
      <div class="controls">
        <%= text_field_tag :to, params[:to] %><br/>
      </div>
    </div>
    <div class="control-group">
      <label class="control-label" for="message"><%= t('blacklight.email.form.message') %></label>
      <div class="controls">
        <%= text_area_tag :message, params[:message] %>
      </div>
    </div>
  </div>
  <div class="modal-footer">
    <button type="submit" class="btn btn-primary"><%= t('blacklight.sms.form.submit') %></button>
  </div>
<% end %>

5) Add an email_search action to the controller

The partial form invokes a controller action (email) that doesn’t exist yet. We’ll add this next.

The Blacklight gem has a saved_searches_controller.rb class that holds the controller methods for saved searches; it’s worth taking a look at it in the gem. We’ll base our new controller method on the email action that already exists in the gem’s catalog controller code (in lib/blacklight/catalog.rb).

In app/controllers/saved_searches_controller.rb (in your local instance), put:

# -*- encoding : utf-8 -*-
require "#{Blacklight.root}/app/controllers/saved_searches_controller.rb"

class SavedSearchesController < ApplicationController
  include Blacklight::Configurable

  # Email action (renders the appropriate view on GET requests;
  # processes the form and sends the email on POST requests)
  def email
    @searches = current_user.searches
    if request.post? and validate_email_params
      email = SearchMailer.email_search(@searches, {:to => params[:to], :message => params[:message]}, url_options)
      email.deliver
      flash[:success] = I18n.t("blacklight.email.success")
      respond_to do |format|
        format.html { redirect_to catalog_path(params['id']) }
        format.js { render 'email_sent' }
      end and return
    end
    respond_to do |format|
      format.html
      format.js { render :layout => false }
    end
  end

  def validate_email_params
    case
    when params[:to].blank?
      flash[:error] = I18n.t('blacklight.email.errors.to.blank')
    when !params[:to].match(defined?(Devise) ? Devise.email_regexp : /^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}$/)
      flash[:error] = I18n.t('blacklight.email.errors.to.invalid', :to => params[:to])
    end
    flash[:error].blank?
  end
end

Here, the email action is grabbing the saved searches from the current_user object and storing them in an array.

@searches = current_user.searches

If the call to this method is POST, this means the form has been submitted, so the method will call a mailer method (email_search, which we still need to write) and pass the @searches array as a parameter.
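The validate_email_params check can be exercised outside Rails. This sketch lifts the controller’s fallback address pattern into a standalone method (the Devise branch and flash messages are omitted; the method name is illustrative):

```ruby
# The fallback pattern used when Devise isn't available.
EMAIL_PATTERN = /^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}$/

# Returns true when the :to value is present and looks like an email
# address, mirroring the two failure cases in validate_email_params.
def valid_email_to?(to)
  return false if to.nil? || to.empty?
  !(to =~ EMAIL_PATTERN).nil?
end

puts valid_email_to?("reader@library.example.edu")  # true
puts valid_email_to?("not-an-address")              # false
puts valid_email_to?(nil)                           # false
```

Note that this pattern caps the top-level domain at four characters, so newer long TLDs would be rejected; Devise.email_regexp is the better choice when Devise is available.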

6) Create the mailer method

Create a new file named search_mailer.rb in the app/mailers directory. This is similar to the record_mailer.rb file in the blacklight gem, adapted for a list of searches rather than bookmarks.

# -*- encoding : utf-8 -*-
# Adapted from record_mailer.rb in the Blacklight gem.
class SearchMailer < ActionMailer::Base
  def email_search(searches, details, url_gen_params)
    subject = I18n.t('blacklight.email_search_subject', :title => "search results")
    @searches = searches
    @message = details[:message]
    @url_gen_params = url_gen_params
    # Set :from to a real sender address for your application.
    mail(:from => "", :to => details[:to], :subject => subject)
  end
end

The subject text key (blacklight.email_search_subject) doesn’t exist yet. You can see a full list of Blacklight’s existing keys in the gem under config/locales. We’ll add the new text required for our local app to config/locales/blacklight.en.yml.

en:
  blacklight:
    application_name: 'Blacklight'
    email_search_subject: 'Your saved search history'
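When the mailer calls I18n.t, the dotted key is resolved against this nested YAML and any %{} placeholders are interpolated from the options. A toy pure-Ruby approximation of that lookup (not the real I18n API):

```ruby
# Nested hash standing in for the parsed blacklight.en.yml entry above.
LOCALES = {
  "en" => {
    "blacklight" => {
      "application_name"     => "Blacklight",
      "email_search_subject" => "Your saved search history"
    }
  }
}

# Toy translate: walk the dotted key through the nested hash,
# then substitute %{name} placeholders from the options.
def toy_t(key, options = {})
  entry = key.split(".").reduce(LOCALES["en"]) { |node, part| node[part] }
  options.reduce(entry) { |text, (name, value)| text.gsub("%{#{name}}", value.to_s) }
end

puts toy_t("blacklight.email_search_subject")
# Your saved search history
```

Since 'Your saved search history' contains no %{title} placeholder, the :title option the mailer passes is simply ignored, which is also how real I18n behaves.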

7) Create the mailer view

You will also need a view for this mailer to create the body of the email that will be sent. The view for document emails in the Blacklight gem is in app/views/record_mailer/email_record.html.erb.

We’ll create a similar view for the search history email.

In your local app, create a search_mailer directory in app/views, and create a new view named email_search.text.erb.  (In other words, create app/views/search_mailer/email_search.text.erb.)

Here are your saved searches sent with the message: <%= @message %>

<% @searches.each do |s| %>
http://localhost:3000/?<%= (s.query_params).to_query %>
<% end %>
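The to_query call is ActiveSupport’s; it serializes the saved query_params hash back into a URL query string. For flat key/value params, URI.encode_www_form from the standard library produces the same kind of string, so you can see what the mailer view emits without booting Rails (the params below are made up):

```ruby
require "uri"

# A flat stand-in for one saved search's query_params.
query_params = { "q" => "heart disease", "page" => "2" }

# encode_www_form form-encodes pairs the way to_query does for flat
# hashes (spaces become '+', reserved characters are percent-escaped).
query_string = URI.encode_www_form(query_params)
puts "http://localhost:3000/?#{query_string}"
# http://localhost:3000/?q=heart+disease&page=2
```

Nested Rails-style params (for example, facet filters like f[format][]) need to_query’s bracket encoding, which encode_www_form does not produce.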

Give it a try! You should now be able to email your saved searches through Blacklight.

8) Next steps

As you can see, the email view for search is hacky. You don’t want to hardcode localhost, and you should probably exclude the action and controller names from the URL. You might also want to consider moving some of the headers and text to a configuration file. (Check out config/locales/blacklight.en.yml in the gem for a starting point.)
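One way to drop the hardcoded localhost is to build each link from the url_gen_params the mailer already receives (the controller passes url_options, which carries the request’s host). A hedged sketch in plain Ruby; the helper name and defaults here are illustrative, not Blacklight’s API:

```ruby
require "uri"

# Compose an absolute search URL from caller-supplied URL options
# instead of hardcoding a host in the mailer view.
def search_link(query_params, url_gen_params)
  protocol = url_gen_params[:protocol] || "http"
  host     = url_gen_params[:host]     || "localhost:3000"
  "#{protocol}://#{host}/?#{URI.encode_www_form(query_params)}"
end

puts search_link({ "q" => "cats" }, { :host => "library.example.edu", :protocol => "https" })
# https://library.example.edu/?q=cats
```

In the view, this would replace the hardcoded http://localhost:3000/? prefix with a value that follows whatever host the app is actually running on.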
