
Feed aggregator

Working with Blacklight Part 3 – Linking to Your Solr Index

CKM Blog - Tue, 2014-03-11 09:07

We are using Blacklight to provide a search interface for a Solr index. I expected it to be super straightforward to plug our Solr index into the Blacklight configuration. That wasn't quite the case! Most of the basic features do plug in nicely, but if you use more advanced Solr features (like facet pivot), or if your solrconfig.xml differs from the Blacklight example solrconfig.xml file, then you are out of luck. There is not currently much documentation to help you out.

SolrConfig.xml – requestDispatcher

After 3.6, Solr ships with <requestDispatcher handleSelect="false"> in the solrconfig.xml file. But Blacklight works with <requestDispatcher handleSelect="true"> and passes in the qt (request handler) parameter explicitly. An example of a Solr request sent by Blacklight looks like this: http://example.com:8983/solr/ltdl3test/select?q=dt:email&qt=document&wt=xml.

The /select request handler should not be defined in solrconfig.xml. This allows the request dispatcher to dispatch to the request handler specified in the qt parameter. Blacklight, by default, expects a search and a document request handler (note the absence of the leading /).

We could override the controller code for Blacklight to call our request handlers.  But a simpler solution is to update the solrconfig.xml to follow the Blacklight convention.
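For reference, a solrconfig.xml that follows this convention might contain something like the sketch below. The attribute values here are illustrative, not copied from the Blacklight example file:

```xml
<!-- Let the dispatcher route requests to the handler named by the qt parameter -->
<requestDispatcher handleSelect="true">
  <httpCaching never304="true" />
</requestDispatcher>

<!-- Named without a leading slash, per the Blacklight convention;
     note that no "/select" handler is defined -->
<requestHandler name="search" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="rows">10</str>
  </lst>
</requestHandler>
```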

The ‘document’ Request Handler and id Passing

Blacklight expects there to be a document request handler defined in the solrconfig.xml file like this:

<!-- for requests to get a single document; use id=666 instead of q=id:666 -->
<requestHandler name="document" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">all</str>
    <str name="fl">*</str>
    <str name="rows">1</str>
    <str name="q">{!raw f=id v=$id}</str> <!-- use id=666 instead of q=id:666 -->
  </lst>
</requestHandler>

As the comment says, Blacklight will pass in the request to SOLR in the format of id=666 instead of q=id:666.  It achieves this by using the SOLR raw query parser.  However, this only works if your unique id is a String.  In our case, the unique id is a long and passing in id=666 does not return anything in the SOLR response.
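To make the difference concrete, here is a small standalone Ruby sketch (not Blacklight code) that builds both request styles against the example URL used earlier in this post:

```ruby
require "uri"

# The example Solr core URL from this post
base = "http://example.com:8983/solr/ltdl3test/select"

# Stock "document" handler style: the id is a bare parameter, and the
# {!raw f=id v=$id} query in solrconfig.xml pulls it in via the raw query parser.
raw_style = base + "?" + URI.encode_www_form(qt: "document", id: 666)

# Override style: the id is folded into an ordinary query string, which
# also works when the unique id field is a long rather than a String.
query_style = base + "?" + URI.encode_www_form(qt: "document", q: "id:666")

puts raw_style
puts query_style
```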

There are two ways to solve this issue. The first is to rebuild the index and change the id type from long to String. The other is to override solr_helper.rb to pass in q=id:xxx instead of id=xxx. The code snippet is below.

require "#{Blacklight.root}/lib/blacklight/solr_helper.rb"

module Blacklight::SolrHelper
  extend ActiveSupport::Concern

  # returns a params hash for finding a single solr document (CatalogController #show action)
  # If the id arg is nil, then the value is fetched from params[:id]
  # This method is primarily called by the get_solr_response_for_doc_id method.
  def solr_doc_params(id = nil)
    id ||= params[:id]
    p = blacklight_config.default_document_solr_params.merge({
      # :id => id # this assumes the document request handler will map the 'id' param to the unique key field
      :q => "id:" + id.to_s
    })
    p[:qt] ||= 'document'
    p
  end
end

Getting Facet Pivot to work

In our index, we have a top-level facet called industry and a child facet called source that should be displayed in a hierarchical tree. It should look something like:

The correct configuration is in the code snippet below.

# Industry
config.add_facet_field 'industry', :label => 'Industry', :show => false

# Source
config.add_facet_field 'source_facet', :label => 'Source', :show => false

# Industry -> Source
config.add_facet_field 'industry_source_pivot_field', :label => 'Industry/Source', :pivot => ['industry', 'source_facet']

You must add the two base facet fields (industry and source_facet) to the catalog_controller.rb file and set :show => false if they should not be displayed, which is usually the case since the data is already shown in the pivot tree. The current documentation on Blacklight facet pivot support makes it seem like only the last line is needed. If only the last line is defined, the facet pivot will render correctly in the refine panel, making you think that facet pivot is working fine. But when you click on a facet, you will get an error: "undefined method 'label' for nil:NilClass".

Categories: CKM

ChemSpider: A Free Source of Online Chemistry Information

In Plain Sight - Mon, 2014-03-10 13:51

Looking for chemistry information? ChemSpider, from the UK’s Royal Society of Chemistry (RSC), is a free online chemical database offering access to information on almost 25 million unique chemical compounds. Data is obtained from over 400 online sources. ChemSpider is more than a database, however, as it asks chemists to participate in data enhancement and curation.


Categories: In Plain Sight

New Additions to the Eric Berne Collections

Brought to Light Blog - Fri, 2014-03-07 13:12

The Eric L. Berne Collection grew by another 8.5 linear feet a few weeks ago, when additional records arrived at Special Collections. The International Transactional Analysis Association (ITAA) and the Berne family have generously placed a large collection of Eric Berne’s early papers and educational records on deposit with UCSF for public research and use. The ITAA has also donated a collection of audio recordings of Berne’s Transactional Analysis lectures and of San Francisco Social Psychiatry Seminar meetings (1963-1970). This new accession in particular documents Berne’s medical school education at McGill University in Montreal and his early career as a psychiatrist. It also includes more of his professional and creative writings in several languages, and contains fascinating ephemera from his frequent research trips around the world.

Berne’s ticket to travel in Turkey, 1938

Photograph page of Berne’s ticket to travel in Turkey, 1938

Three-dimensional objects are represented as well, such as an original version of the board game based on Berne’s bestselling book Games People Play.

This collection will be processed in the next several weeks and linked to other rich materials in the related Berne collections. Online finding aids to these materials are coming soon.


Categories: Brought to Light

Questionnaires, Choices, and Surveys! Oh, My!

Convergence - Thu, 2014-03-06 16:21

Do you find yourself confused about which data-gathering tool to use in your CLE course? CLE users have a number of activities for data collection to choose from (4 in total!), and some may argue that there are too many.

The Learning Technologies Group (LTG) is often asked the question, “Should I use the Questionnaire or Survey?” or “I want to collect data from people not enrolled in my CLE course, which activity should I use?” A good start is to read the descriptions for the four CLE data-gathering tools listed below:

  1. Choice: Use this activity to ask a single question and offer a number of possible responses. Choice results may be published after students have answered, after a certain date, or not at all. Results may be published with student names or anonymously. The Choice is a non-graded activity. Example: Sign up for project teams.
  2. Feedback: Add the Feedback activity to your course to create a custom survey using a variety of question types including multiple choice, yes/no or text input. Feedback responses may be anonymous if desired, and results may be shown to all participants or instructors only. The Feedback activity may also be completed by guests not logged into the CLE. Example: Course evaluation.
  3. Questionnaire: A Questionnaire allows you to construct surveys using a variety of question types. Unlike the Feedback activity, Questionnaires can be graded and cannot be completed by guests not logged into the CLE. Example: Graded project evaluation
  4. Survey: The Survey provides a number of verified survey instruments that have been found useful in assessing and stimulating learning in online environments. The Survey activity comes pre-populated with questions. Faculty who wish to create their own survey should use the Feedback activity. Example: Research project.

Does this sound confusing or overwhelming? No need to worry – LTG is here to help! We have created a breakdown of the CLE data-gathering tools in the matrix provided below. The matrix includes descriptions for each activity, a summary of the functionality, support resources and examples. We also included Qualtrics in the matrix to compare the online survey software with the four CLE data-gathering tools. Read more about using Qualtrics from UCSF Information Technology.

Click here to view the PDF with active hyperlinks.

If you still feel overwhelmed with the number of options for data gathering in the CLE, you may not have to wait long for a solution. With the upcoming release of Moodle 2.7 later this year, the Survey, Feedback and Questionnaire activities are scheduled to be consolidated into a new Survey 2 activity. Read more about the new activity and view the roadmap for development on Moodle.org.

Are you currently using the Choice, Feedback, Questionnaire or Survey in your CLE course? If so, please leave a comment below to share your experience with data collection in the CLE.

In the meantime, please feel free to contact the Learning Technologies Group with any CLE-related questions!

Resources:

  • Moodle.org
  • Wodonga TAFE
  • ItsThatPhotoGuy
  • EducationPublic
  • Sean McClelland


Categories: Convergence

Improving Code Quality: Together. Like We Used To. Like a Family.

CKM Blog - Tue, 2014-03-04 14:44

We had a day-long hacking session aimed at improving the code quality and test coverage of Ilios.

This post is clearly not a step-by-step instruction manual on transforming an intimidatingly large pile of spaghetti code into a software engineering masterpiece. I hope video of Dan Gribbin’s jQuery Conference presentation from last month is posted soon, so I can see what wisdom and techniques I can steal, er, acquire.

In the meantime, here are a few small things we learned doing the first session.

  1. Give everyone concrete instructions ahead of time regarding how to get the app set up and how to get all the existing tests running. Have new folks arrive early for a get-set-up session. This allows new folks to hit the ground running and lets experienced folks begin coding or help people with interesting problems, rather than helping people just get started.
  2. Decide on a focus ahead of time. Try to fix one class of bugs, write one type of test, complete coverage for one large component, or whatever. This allows for more collaboration as people are working on similar things.
  3. Do it often or you lose momentum! I suspect that weekly is too often. We’re trying once every two-to-three weeks.

P.S. If you recognized the reference in this post’s title, then clearly you have all the skills required to work here. We’re hiring a Front-End Engineer, and the job is so good that HR pasted the description twice into the ad. Submit your résumé and knock our socks off.

Categories: CKM

UCSF Archives Acquires Laurie Garrett Papers

Brought to Light Blog - Fri, 2014-02-28 11:10

Laurie Garrett, Pulitzer Prize-winning science journalist and researcher, recently donated her papers to the UCSF Archives and Special Collections. Ms. Garrett is the only writer to have been awarded all three of the Big “Ps” of journalism: the Peabody, the Polk, and the Pulitzer.

Ms. Garrett is the best-selling author of The Coming Plague: Newly Emerging Diseases in a World Out of Balance (1994) and Betrayal of Trust: The Collapse of Global Public Health (2000). Her latest book is I Heard the Sirens Scream: How Americans Responded to the 9/11 and Anthrax Attacks (2011). She graduated with honors in biology from the University of California, Santa Cruz, attended graduate school in the Department of Bacteriology and Immunology at the University of California, Berkeley, and did laboratory research at Stanford University with Dr. Leonard Herzenberg. During her PhD studies, Ms. Garrett started reporting on science news at KPFA, a local radio station. This hobby soon became far more interesting than graduate school, and she took a leave of absence to explore journalism. In 1980, she joined National Public Radio, working out of the network’s bureaus in San Francisco and, later, Los Angeles as a science correspondent. In 1988, Ms. Garrett left NPR to join the science writing staff of Newsday. Her Newsday reporting has earned several awards: Award of Excellence from the National Association of Black Journalists (for “AIDS in Africa,” 1989), First Place from the Society of Silurians (for “Breast Cancer,” 1994), and the Bob Considine Award of the Overseas Press Club of America (for “AIDS in India,” 1995). Since 2004, Laurie Garrett has been a senior fellow for global health at the Council on Foreign Relations in New York. Ms. Garrett was awarded doctorates Honoris Causa by three universities: Illinois Wesleyan University, the University of Massachusetts Lowell, and Georgetown University.

The Laurie Garrett papers consist predominantly of the research files used by Ms. Garrett in the writing of her two books, The Coming Plague and Betrayal of Trust. They contain numerous drafts and published newspaper and magazine articles, including her Pulitzer Prize-winning 1996 series printed in Newsday, chronicling the Ebola virus outbreak in Zaire. Also included are a series of 25 articles, “Crumbled Empire, Shattered Health,” on the AIDS epidemic and public health crisis in the former Soviet Union that received the George Polk Award for Foreign Reporting in 1997. This collection encompasses a wealth of primary resources consisting of correspondence, interviews, photographs, and ephemera, including HIV/AIDS-related posters from around the world. A sizable part of the collection includes research materials, interviews and notebooks (that will be transferred to the archives at a later date) from the time when Laurie Garrett was a science correspondent for NPR, Newsday, and wrote for the Washington Post and the Los Angeles Times, among many other publications.

These papers also contain secondary source materials such as complete publishing runs of AIDS Weekly, AIDS Newsletter, and AIDS Treatment News, scholarly papers, conference abstracts, reports, and promotional materials.

This sizable collection consisting of more than 150 linear feet spans from the mid-1970s to 2013. It documents a broad array of subjects related to global health, newly emerging and re-emerging diseases – primarily the HIV/AIDS epidemic, SARS, avian flu, Ebola, anthrax, and influenza – and their effects on foreign policy, national security, and bio-terrorism.

Laurie Garrett gave a lecture at Toland Hall on UCSF campus on February 21, 2014

The Laurie Garrett papers are a major acquisition for the UCSF archives and will enhance several existing areas of collecting, in particular the history of the HIV/AIDS epidemic, infectious and chronic diseases, and global and public health. UCSF is considered one of the preeminent repositories of AIDS-related materials, and Ms. Garrett’s collection complements papers from the AIDS History Project, which began in 1987 as a joint effort of historians, archivists, AIDS activists, health care providers, and others to secure historically significant resources about the response to the AIDS crisis in San Francisco. For more than thirty years, since she started covering the outbreak in San Francisco (even before it became publicly known that a virus was responsible), Ms. Garrett has been collecting materials on the evolution of the HIV pandemic. Her vast and comprehensive files contain information on many topics, including the social origins and history of HIV/AIDS; HIV drugs; the AIDS policies of Presidents Reagan and Clinton; and detailed HIV/AIDS information on many different countries. Her extensive writings and files on the subject of public health mesh well with the materials from the Philip Randolph Lee and Harold S. Luft papers already preserved in the UCSF archives.

The availability of these materials for research will help advance the study and teaching of the health sciences, and allow further analysis of how medical discoveries were presented and described to a broader audience. The papers of Ms. Garrett, a gifted and internationally recognized author and investigative reporter, will serve as a source of inspiration for novice and experienced medical and science writers and journalists.

The Laurie Garrett papers were officially unveiled during the special presentation she gave at UCSF on February 21, 2014.

For more information, or if you have questions on how to access this collection, please contact Polina Ilieva: polina.ilieva@ucsf.edu.

Categories: Brought to Light

DASHM adopts Jacob Bigelow’s American Medical Botany

Brought to Light Blog - Wed, 2014-02-26 11:25

We’re pleased to announce that two of our books have been adopted!

The UCSF Department of Anthropology, History, & Social Medicine has chosen to conserve American Medical Botany. Read more about the book from the perspective of Sarah Robertson, a PhD student in the department.

Additionally, the always supportive Bay Area History of Medicine Society has graciously taken De humani corporis fabrica libri septem under its wing.

Categories: Brought to Light

Running Behat Tests via Sauce Labs on Travis-CI

CKM Blog - Mon, 2014-02-24 10:04

We use Behat for testing the Ilios code base. (We also use PHPUnit and Jasmine.) We started out using Cucumber but switched to Behat on the grounds that we should use PHP tools with our PHP project. Our thinking was that someone needing to dig in deep to write step code shouldn’t have to learn Ruby when the rest of the code was PHP.

We use Travis for continuous integration. Naturally, we needed to get our Behat tests running on Travis. Fortunately, there is already a great tutorial from about a year ago explaining how to do this.

Now let’s say you want to take things a step further. Let’s say you want your Behat tests to run on a variety of browsers and operating systems, not just whatever you can pull together on the Linux host running your Travis tests. One possibility is Sauce Labs, which is free for open source projects like Ilios.

Secure Environment Variables

Use the travis Ruby gem to generate secure environment variable values for your .travis.yml file containing your SAUCE_USERNAME and your SAUCE_ACCESS_KEY. See the helpful Travis documentation for more information.

Sauce Connect

You may be tempted to use the Travis addon for Sauce Connect. I don’t because, using the addon, Travis builds hang (and thus fail) when running the CI tests in a fork. This is because forks cannot read the secure environment variables generated in the previous step.

Instead, I check to see if SAUCE_USERNAME is available and, if so, then I run Sauce Connect using the same online bash script (located in a GitHub gist) used by the addon provided by Travis. (By the way, you can check for TRAVIS_SECURE_ENV_VARS if that feels better than checking for SAUCE_USERNAME.)

The specific line in .travis.yml that does this is:

- if [ "$SAUCE_USERNAME" ] ; then (curl -L https://gist.github.com/santiycr/5139565/raw/sauce_connect_setup.sh | bash); fi

Use the Source, Luke

Now it’s time to get Behat/Mink to play nicely with Sauce Labs.

The good news is that there is a saucelabs configuration option. The bad news is that, as far as I can tell, it is not documented at the current time. So you may need to read the source code if you want to find out about configuration options or troubleshoot. Perhaps it’s intended to be released and documented in the next major release. Regardless, we’re using it and it’s working for us. Enable it in your behat.yml file:

default:
  extensions:
    Behat\MinkExtension\Extension:
      saucelabs: ~

Special Sauce

We keep our Behat profile for Sauce in its own file, because it’s special. Here’s our sauce.yml file:

# Use this profile to run tests on Sauce against the Travis-CI instance
default:
  context:
    class: "FeatureContext"
  extensions:
    Behat\MinkExtension\Extension:
      base_url: https://localhost
      default_session: saucelabs
      javascript_session: saucelabs
      saucelabs:
        browser: "firefox"
        capabilities:
          platform: "Windows 7"
          version: 26

Note that we configured our app within Travis-CI to run over HTTPS. In a typical setting, you will want the protocol of your base_url to specify HTTP instead.
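In that more typical case, the relevant part of the profile might look like the sketch below (the host and port are hypothetical; everything else keeps the structure of sauce.yml):

```yaml
# Hypothetical non-Travis profile: the app is served over plain HTTP
default:
  extensions:
    Behat\MinkExtension\Extension:
      base_url: http://localhost:8080
      default_session: saucelabs
      javascript_session: saucelabs
```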

Here’s the line in our .travis.yml to run our Behat tests using the Sauce profile:

- if [ "$SAUCE_USERNAME" ] ; then (cd tests/behat && bin/behat -c sauce.yml); fi

Of course, if you’re using a different directory structure, you will need to adjust the command to reflect it.

That’s All, Folks!

I hope this has been helpful. It will no doubt be out of date within a few months, as things move quickly with Behat/Mink, Sauce Labs, and Travis-CI. I will try to keep it up to date and put a change log here at the bottom. Or if a better resource for this information pops up, I’ll just put a link at the top. Thank you for reading!

Categories: CKM

PubMed Commons: A System for Commenting on Articles in PubMed

In Plain Sight - Wed, 2014-02-19 17:21

Last October the NCBI launched the pilot phase of a program called PubMed Commons, designed to allow users to comment on published abstracts on the PubMed website.

PubMed Commons enables authors to share opinions and information about scientific publications in PubMed. All authors of publications in PubMed are eligible to become members. Members play a pivotal role in ensuring that PubMed Commons remains a forum for open constructive criticism and discussion of scientific issues. They can comment on any publication in PubMed, rate the helpfulness of comments, and invite other eligible authors to join.

More information.

Categories: In Plain Sight

Redesigning the Legacy Tobacco Documents Library Site Part 1 — User Research

CKM Blog - Wed, 2014-02-19 11:21

The Legacy Tobacco Documents Library site (LTDL) is undergoing a user-centered redesign.  A user-centered design process (a key feature of user experience, or UX) is pretty much what it sounds like: every decision about how the site will work starts from the point of view of the target user’s needs.

As a UX designer, my job begins with user research: identifying the target users and engaging with them to identify their actual needs (versus what we might assume they want).

Prior to my arrival, the LTDL team had already identified three target users: the novice (a newbie with little or no experience searching our site); the motivated user (someone who has not been trained in how to search our site but is determined to dig in and get what they need; unlike the novice, the motivated user won’t abandon their search efforts); and the super user (someone who has gone through LTDL search training and knows how to construct complex search queries).

Given this head start, I spent a few weeks conducting extensive user research with a handful of volunteers representing all three user types.  I used a combination of hands-off observation, casual interviews, and user testing of the existing site to discover:

    • what the user expects from the LTDL search experience
    • what they actually need to feel successful in their search efforts
    • what they like about the current site
    • what they’d like to change about the current site

Lessons learned will guide my design decisions for the rest of the process.  Below you’ll find excerpts from the User Research Overview presentation I delivered to my team:


In addition to engaging directly with users, I did a deep dive into the site analytics.  The data revealed the surprising statistic that most of the LTDL site traffic (75%) originated from external search engines like Google.  The data further revealed that once these users got to our site, they were plugging in broad search terms (like tobacco or cancer) that were guaranteed to return an overwhelming number of results.  This meant that most of our users were novices and motivated users, not the super users we were used to thinking about and catering to.

This information exposed the key problem to be solved with the LTDL redesign: how to build an easy-to-use search engine that teaches the user how to return quality results, without dumbing down the experience for our super users.

Categories: CKM

UCSF Archives Lecture Series presents Laurie Garrett, February 21, 2014

Brought to Light Blog - Tue, 2014-02-18 11:01

Join us on Friday, February 21st as Laurie Garrett, Pulitzer Prize-winning science journalist and researcher, gives a special presentation at UCSF. This is the inaugural lecture in a new series from UCSF Archives & Special Collections.

Lecture: Tracking Disease, Forecasting Futures
Presenter: Laurie Garrett, Senior Fellow for Global Health, Council on Foreign Relations
Date: Friday, February 21, 2014
Time: 10:00 am – 11:15 am
Location: Toland Hall Auditorium (U142), University Hall, 533 Parnassus, 1st floor
This lecture is free and open to the public.

Laurie Garrett

About Laurie Garrett
As a medical and science writer for Newsday in New York City, Laurie Garrett became the only writer ever to have been awarded all three of the Big “Ps” of journalism: the Peabody, the Polk (twice), and the Pulitzer. Laurie is also the best-selling author of The Coming Plague: Newly Emerging Diseases in a World Out of Balance and Betrayal of Trust: The Collapse of Global Public Health. In March 2004, Laurie took the position of Senior Fellow for Global Health at the Council on Foreign Relations. She is an expert on global health with a particular focus on newly emerging and re-emerging diseases, public health, and their effects on foreign policy and national security. Learn more.

About the UCSF Archives & Special Collections Lecture Series
UCSF Archives & Special Collections launched this lecture series to introduce a wider community to treasures and collections from its holdings, to provide an opportunity for researchers to discuss how they use this material, and to celebrate clinicians, scientists, and health care professionals who donated their papers to the archives.

The second lecture, “Remembering the First Years of the AIDS Epidemic,” is scheduled for Wednesday, April 16th, from 12 pm to 1 pm in the Lange Room in the Library and will feature Drs. Volberding, Cooke, Greenspan, and Abrams.

Categories: Brought to Light

Working with Blacklight Part 2 – displaying the result count

CKM Blog - Tue, 2014-02-11 11:26

This is the second in a series of posts about customizing Blacklight.

Last time, we implemented a feature that emailed a list of saved searches. We’d also like to display the number of results retrieved by each search. This task is a good way to learn about how a Solr response is stored and processed in Blacklight.

You can either start from a clean installation of Blacklight or build on the results of the previous exercise. A completed version is available on GitHub at https://github.com/ucsf-ckm/blacklight-email-tutorial.

Step 1: Add a “numfound” attribute to the Search model

Search history and saved searches are stored in an array of Search objects. The Search model in Blacklight holds the query_params for a search but doesn’t store the number of results. We’ll add an attribute, “numfound”, to store this value.

There are a few ways to do this in Rails – here, we’ll go with a migration.

rails g migration add_numfound_to_search numfound:integer

This should produce a new migration:

class AddNumfoundToSearch < ActiveRecord::Migration
  def change
    add_column :searches, :numfound, :integer
  end
end

...and run the migration:

rake db:migrate

You may want to inspect the new schema or object to make sure that the model has been modified properly.

Step 2: Retrieve the number of results and store it in the Search object

Searches are created and stored in the search_context.rb class in the Blacklight gem (under lib/assets/blacklight/catalog/search_context.rb).

saved_search ||= begin
  s = Search.create(:query_params => params_copy)
  add_to_search_history(s)
  s
end

This code is not called explicitly in a controller – instead, it is run as a before_filter prior to the execution of any controller that includes it. This is mentioned in the comments at the top of the search_context.rb file.

This works for storing the query parameters, which are known before the controller is called. However, we won’t know the number of results in the Solr response until after the controller is called, so we’ll need to move the code for creating and saving a Search into a controller method.

We can get access to the object holding the solr response in the index method of the catalog controller (under lib/blacklight/catalog.rb in the Blacklight gem).

(@response, @document_list) = get_search_results
@filters = params[:f] || []

The get_search_results method in solr_helper.rb runs a Solr query and returns a SolrResponse object (lib/solr_response.rb). Since this exercise is really about getting familiar with the Blacklight code base, it’s worth opening these classes and taking a look at how a query is executed and how results are stored.

The solr_response object (stored in @response, above) provides a hash with results data. The number of results is stored under “numFound”. We can now modify the index method to retrieve the number of results associated with a Solr query, add them to the Search object, and save the results.
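As a standalone illustration (the document values here are made up), this is roughly what that hash looks like when a wt=json Solr response is parsed, and where numFound sits in it:

```ruby
require "json"

# A trimmed, hypothetical example of the JSON payload Solr returns for a
# search; Blacklight wraps the parsed hash in its SolrResponse object.
raw = <<~JSON
  {
    "responseHeader": { "status": 0, "QTime": 4 },
    "response": {
      "numFound": 1234,
      "start": 0,
      "docs": [ { "id": "666", "title": "Example document" } ]
    }
  }
JSON

solr_response = JSON.parse(raw)

# The hit count lives under response -> numFound; this is the value the
# controller method below copies into Search#numfound.
numfound = solr_response["response"]["numFound"]
puts numfound  # => 1234
```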

Here’s the full code (add this to catalog_controller.rb in your local app).

# get search results from the solr index
def index
  (@response, @document_list) = get_search_results
  @filters = params[:f] || []

  params_copy = params.reject { |k, v| blacklisted_search_session_params.include?(k.to_sym) or v.blank? }
  return if params_copy.reject { |k, v| [:action, :controller].include? k.to_sym }.blank?

  saved_search = searches_from_history.select { |x| x.query_params == params_copy }.first
  s = Search.new(:query_params => params_copy)
  s.numfound = @response.response["numFound"]
  s.save
  add_to_search_history(s)

  respond_to do |format|
    format.html { }
    format.rss { render :layout => false }
    format.atom { render :layout => false }
    format.json do
      render json: render_search_results_as_json
    end
  end
end

Step 3: Add the number of results to the view

Now that the number of results is available in the Search object, you can easily display them in the index page in the saved_searches or search_history views.

Here’s the snippet for index.html.erb under saved_searches:

<table class="table table-striped">
  <%- @searches.each do |search| -%>
    <tr>
      <td><%= link_to_previous_search(search.query_params) %></td>
      <td>results: <%= search.numfound %></td>
      <td><%= button_to t('blacklight.saved_searches.delete'), forget_search_path(search.id) %></td>
    </tr>
  <%- end -%>
</table>

The only change here is the addition of “search.numfound”, populated in the controller method above.

You can add the number of results to the search_history similarly.

Step 4: Try it out

You should now be able to run a search, list the search history (or saved searches, depending on which views you modified), and view the number of results associated with each search.

One note: this numfound value won’t automatically update if new material is added to the index. Clicking on the search link will display the larger, current count, so the stored value can get out of sync.

Categories: CKM

UCSF on Historypin

Brought to Light Blog - Tue, 2014-02-11 09:56

Historypin is a website that allows users to view and post historical photos that have been digitally “pinned” to a map, thereby highlighting locations that may be unrecognizable in the photos. It allows photographs to be searched by place, time, or channel; channels are accounts that have been set up by various people and organizations.

We created our channel on Historypin, UCSF Archives & Special Collections, in part to begin celebrating the 150th anniversary of UCSF! Toland Medical College began in 1864 in the heart of San Francisco’s North Beach neighborhood, moved to the wide-open countryside of the Parnassus/Inner Sunset area, and has continued to change and grow.

We will continue to add images and information throughout the coming year. Check back often for new and interesting images of the ever-evolving UCSF campus. We encourage you to add comments or information to our pinned images!

One of the niftiest features of Historypin is the ability to pin an image directly onto Street View. If the photograph was taken from the street (or a similar angle and location), it can be placed on the map over the Google Street View image of the photo's location – just like the image of Market Street Earthquake Damage, 1906 shown below. You can toggle the fade slider to adjust the photograph's opacity.

For more detailed information on the history of UCSF, please see A History of UCSF.

Categories: Brought to Light

Articulate Help Now Available!

Convergence - Wed, 2014-02-05 13:27

Are questions like "How can I successfully record narration onto my PowerPoint presentation?" or "How do I change the settings on my Articulate presentation to ensure the best user experience?" keeping you up at night and disrupting your sleep patterns? What about "How do I go about uploading an Articulate presentation to the CLE for students to view and engage with?"

Your sleep is precious; don’t let questions like these interrupt your REM cycle! Check out the new Articulate resources on the Multimedia Support Center hosted by the Learning Technologies Group!

And remember, you can always find answers to your questions about the UCSF CLE on the CLE Support Center hosted by the Learning Technologies Group!

Image Credit: James Stone

Categories: Convergence

It Takes a Village: Building the NeuroExam Tutor App

Mobilized - Wed, 2014-01-29 18:12

The UCSF NeuroExam Tutor app seeks to solve a problem that has faced medical educators for decades: medical students are uncertain and timid when performing the neurological exam. Educators suppose that this is because of the complexity of the nervous system and the multitude of ways to investigate its functions. However, it is even more troubling that this insecurity continues into the careers of clinicians from most specialties. To address this problem, UCSF neurologists Susannah Cornes and Vanja Douglas proposed a gentle introduction to the neurological exam over the four years of medical school. This innovative approach could not have been realized without the partnerships that lead to the creation of an iPad app.

Features:

  • More than 60 high quality videos
  • In-depth descriptions of how to execute more than 50 different physical exam maneuvers
  • 6 interactive cases with real patient videos
  • Descriptions of 8 exam categories with explanations of terminology and grading scales
  • Quick reference flashcards for 6 common neurological complaints
  • Pearls and pitfalls from the master clinicians at UCSF

While many projects in medical education are carried out by a single motivated educator, increasingly, ideas cannot reach their fullest potential without a team. The NeuroExam Tutor team consisted of several doctors, myself lending the perspective of a medical student, and the Technology Enhanced Learning team in the UCSF School of Medicine Office of Medical Education. The app team became truly inter-professional when we partnered with Bandwdth, a digital publishing firm with experience creating rich, multimedia-driven apps. Throughout the process, specialists in educational theory, interface design, videography, and programming were all tapped to make the multimedia NeuroExam Tutor app a reality. This partnership was productive, exciting, and drastically different from most collaborative efforts within the hospital.

He who studies medicine without books sails an uncharted sea, but he who studies medicine without patients does not go to sea at all. – William Osler

Our experience building this app highlights the fact that medical education is in a time of transition driven by the rising tide of technology and the availability of information. The “books” to which Osler refers are no longer just leather-bound tomes filled with yellowed pages. Today’s medical student is constantly bombarded by websites, apps, feeds and notifications that are the books of our age. Sounds, videos, and interactive problem solving activities promise to develop skills, as well as knowledge, as they guide students in the hospital and clinic. In developing the NeuroExam Tutor app, our aim was to create a resource that fulfills the role of Osler’s books without forgetting that the ultimate goal is to improve the quality of patients’ lives.

I believe that the multi-disciplinary skills of the people involved in this project allowed us to tell the patient stories in a more engaging way. Students learn directly from the patients couched in those stories, and as a result, we capture some of the spirit of patient interaction and presence that Osler holds to be so fundamental.

In closing, I’d like to note that the NeuroExam Tutor project could not have achieved the goal of educating students while maintaining the primacy of the patient experience anywhere but UCSF. UCSF is a unique institution, insomuch as it embodies a culture of caring and respect for the patient experience, as well as an emphasis on fundamental knowledge and treatment. As medical education transitions to a curriculum that increasingly relies on technologically enhanced resources, UCSF is uniquely poised to imbue those resources with a human touch.

Related posts:

  1. iPads in the Lab: interview with UCSF’s Chandler Mayfield CHANDLER H. MAYFIELD is the Director for Technology Enhanced Learning...
  2. Inkling: iPad Interactive Textbooks With all the e-readers and tablets out there, e-books are...
  3. Apple Tackles Textbooks Apple made some big announcements yesterday at its Education Event...
Categories: Mobilized

Adopt-a-Book: Help Us Restore and Preserve 150 Rare Books

Brought to Light Blog - Tue, 2014-01-28 16:22

UCSF’s Rare Book Collection contains more than 15,000 volumes, including items from the 15th century, collected over the past 150 years through donations and gifts from faculty, alumni, and friends of the Library. Over time, many books have deteriorated to the point that additional use would further damage them. As a busy research library, we must keep these materials accessible to present and future researchers.

A conservator at the UC Berkeley Library conservation lab.

In honor of UCSF’s 150th Anniversary, UCSF Archives & Special Collections has launched the Adopt-a-Book program, which aims to fund the restoration of 150 books that were published before 1864, the year that UCSF was founded. Your generous donations will support the work of conservators who will stabilize the books and prevent future damage, in addition to paper restoration, cleaning, and some cosmetic treatment.

Bartolomeo Eustachi; Bernardi Siegfried Albini, Explicatio tabularum anatomicarum, 1744. One of the books that is featured on the Adopt-a-Book website.

The Library is very grateful to the members of the Bay Area History of Medicine Society, who have already donated money to restore a copy of De Humani Corporis Fabrica, 2nd edition (1555), written by Andreas Vesalius.

Interested in adopting a book from this exceptional collection? Learn more about the Adopt-a-Book program. We sincerely appreciate your generosity and continued support!

Categories: Brought to Light

Working with Blacklight Part 1 – email search history

CKM Blog - Tue, 2014-01-28 11:40

This is the first of a series of posts about configuring and modifying Blacklight at UCSF. It’s less about emailing search history and more about getting familiar with Blacklight by picking something to modify and seeing how it goes…

We are developing a front end for a Solr repository of tobacco industry documents. Blacklight, out of the box, provides a lot of what we’d need. We decided to come up with a business requirement that isn’t currently in Blacklight and see what it’s like working with the code.

We decided to try emailing a list of saved searches. This blog post is a write-up of my notes. I’m hoping it will be useful as a tutorial/exercise for developers looking to get up to speed working with Blacklight code.

You should be able to start with a clean installation of Blacklight and add the functionality to email search histories from the notes here. A completed version is available on github at https://github.com/ucsf-ckm/blacklight-email-tutorial.

Step 1: Get a clean installation of the Blacklight app going

Use the quickstart guide at
https://github.com/projectblacklight/blacklight/wiki/Quickstart

(do all of it, including the Jetty Solr part).

Step 2: Configure an SMTP mailer (optional)

This is optional, but I prefer not to use a system mailer on my dev machine.

in config/environments/development.rb

# Expands the lines which load the assets
config.assets.debug = true

config.action_mailer.delivery_method = :smtp
config.action_mailer.default_url_options = { host: 'myhost.com' }
config.action_mailer.perform_deliveries = true
config.action_mailer.smtp_settings = {
  :address              => "smtp.gmail.com",
  :port                 => 587,
  :domain               => "localhost:3000",
  :user_name            => "username",
  :password             => "password",
  :authentication       => "plain",
  :enable_starttls_auto => true
}

Test this to be sure it works by creating and emailing a Blacklight bookmark to yourself (the next steps won’t work if this doesn’t work).

Step 3: Add a feature to send an email history through the saved searches page

1) Create and save a few searches

Do a few searches (anything you like), then go to Saved Searches and save a few of them.
You’ll notice that unlike the Bookmarks page, there’s no functionality to email your saved searches yet.

2) Add a button to email saved searches.

First, we need to add an email button to the saved searches page. We’ll piggyback on the email button used for bookmarks.

If you look in your views directory, you won’t see any code in your local app. It is currently stored in the Blacklight gem. Because our customizations are local we (of course) won’t hack the gem directly; we’ll add or override things in our local app.

You can follow this tutorial without looking at the Blacklight gem source directly, but I’d recommend unpacking the gem so that you can look at the code. Do not change the gem code.

We’ll need to add an email button to the Saved Searches page. To do this, we’ll need to both create a new view and override an existing view in the Blacklight gem.

The view code for the main display page for saved searches is in /app/views/saved_searches/index.html

We’ll override this page locally to add the email button. To do this, create a new directory called saved_searches in the views directory and create a file called index.html.erb with this content (modified from the same file in the gem itself):

<div id="content" class="span9">
  <h1><%= t('blacklight.saved_searches.title') %></h1>

  <%- if current_or_guest_user.blank? -%>
    <h2><%= t('blacklight.saved_searches.need_login') %></h2>
  <%- elsif @searches.blank? -%>
    <h2><%= t('blacklight.saved_searches.no_searches') %></h2>
  <%- else -%>
    <p>
      <%= link_to t('blacklight.saved_searches.clear.action_title'), clear_saved_searches_path,
            :method => :delete,
            :data => { :confirm => t('blacklight.saved_searches.clear.action_confirm') } %>
    </p>

    <h2><%= t('blacklight.saved_searches.list_title') %></h2>
    <%= render 'search_tools' %>

    <table class="table table-striped">
      <%- @searches.each do |search| -%>
        <tr>
          <td><%= link_to_previous_search(search.query_params) %></td>
          <td><%= button_to t('blacklight.saved_searches.delete'), forget_search_path(search.id) %></td>
        </tr>
      <%- end -%>
    </table>
  <%- end -%>
</div>

This will add the search tools (through <%= render 'search_tools' %>) to the index page.

The _search_tools.html.erb partial doesn’t exist in the gem. To create one, we’ll copy and modify the _tools.html.erb partial from the gem (used to render the various tools for bookmarks) to create a partial _search_tools.html.erb (also in the saved_searches view folder).

<ul class="bookmarkTools">
  <li class="email">
    <%= link_to t('blacklight.tools.email'), email_search_path(:id => @searches),
          {:id => 'emailLink', :class => 'lightboxLink'} %>
  </li>
</ul>

3) Create routes for the email_search path

This email button links to a new path (email_search_path) that will need routes. Your first instinct as a Rails programmer might be to look into config/routes.rb.  But the Blacklight gem uses a separate class in /lib/blacklight/routes.rb to generate most of the routes.

Instead of manually creating a new route in the config folder, we’ll modify Blacklight’s routes class. There are a few ways to do this. You could override the entire class by creating a routes.rb file at the same directory path in your Rails app. For this exercise, we’ll limit our modifications to the one method we need to override and put the code in an initializer (under config/initializers). Although we’re only overriding one method, I would recommend taking a look at the full source in the gem to get a better sense of what this class does.

# -*- encoding : utf-8 -*-
require "#{Blacklight.root}/lib/blacklight/routes.rb"
require 'deprecation'

module Blacklight
  class Routes
    extend Deprecation

    protected

    module RouteSets
      def saved_searches(_)
        add_routes do |options|
          delete "saved_searches/clear",      :to => "saved_searches#clear",  :as => "clear_saved_searches"
          get    "saved_searches",            :to => "saved_searches#index",  :as => "saved_searches"
          put    "saved_searches/save/:id",   :to => "saved_searches#save",   :as => "save_search"
          delete "saved_searches/forget/:id", :to => "saved_searches#forget", :as => "forget_search"
          post   "saved_searches/forget/:id", :to => "saved_searches#forget"
          get    "saved_searches/email",      :to => "saved_searches#email",  :as => "email_search"
          post   "saved_searches/email"
        end
      end
    end
    include RouteSets
  end
end

4) Add a form to submit the email

Now that the routes are in place, we can create the form needed to submit an email.

In app/views/saved_searches, create an email.html.erb view. This is based on the email.html.erb used to email bookmarks (under app/views/catalog in the Blacklight gem).

<div class="modal-header">
  <button type="button" class="close" data-dismiss="modal" aria-hidden="true">×</button>
  <h1><%= t('blacklight.email.form.title') %></h1>
</div>

<%= render :partial => 'email_search_form' %>

In the same directory, create a partial to provide the form fields.

_email_search_form.html.erb

<%= form_tag url_for(:controller => "saved_searches", :action => "email"),
      :id => 'email_search_form',
      :class => "form-horizontal ajax_form",
      :method => :post do %>
  <div class="modal-body">
    <%= render :partial => '/flash_msg' %>
    <div class="control-group">
      <label class="control-label" for="to"><%= t('blacklight.email.form.send_to') %></label>
      <div class="controls">
        <%= text_field_tag :to, params[:to] %><br/>
      </div>
    </div>
    <div class="control-group">
      <label class="control-label" for="message"><%= t('blacklight.email.form.message') %></label>
      <div class="controls">
        <%= text_area_tag :message, params[:message] %>
      </div>
    </div>
  </div>
  <div class="modal-footer">
    <button type="submit" class="btn btn-primary"><%= t('blacklight.sms.form.submit') %></button>
  </div>
<% end %>

5) Add an email_search action to the controller

The partial form invokes a controller action (email) that doesn’t exist yet. We’ll add this next.

The Blacklight gem has a saved_searches_controller.rb class that holds the controller methods for saved searches; it’s worth taking a look at it in the gem. We’ll base our new controller method on the email_record action that already exists in the catalog controller (in lib/blacklight/catalog.rb in the gem).

In app/controllers/saved_searches_controller.rb (in your local instance), put:

# -*- encoding : utf-8 -*-
require "#{Blacklight.root}/app/controllers/saved_searches_controller.rb"

class SavedSearchesController < ApplicationController
  include Blacklight::Configurable

  # Email action: renders the form on GET requests; processes the form
  # and sends the email on POST requests
  def email
    @searches = current_user.searches

    if request.post? and validate_email_params
      email = SearchMailer.email_search(@searches, {:to => params[:to], :message => params[:message]}, url_options)
      email.deliver

      flash[:success] = I18n.t("blacklight.email.success")

      respond_to do |format|
        format.html { redirect_to catalog_path(params['id']) }
        format.js { render 'email_sent' }
      end and return
    end

    respond_to do |format|
      format.html
      format.js { render :layout => false }
    end
  end

  def validate_email_params
    case
    when params[:to].blank?
      flash[:error] = I18n.t('blacklight.email.errors.to.blank')
    when !params[:to].match(defined?(Devise) ? Devise.email_regexp : /^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}$/)
      flash[:error] = I18n.t('blacklight.email.errors.to.invalid', :to => params[:to])
    end

    flash[:error].blank?
  end
end

Here, the email action is grabbing the saved searches from the current_user object and storing them in an array.

@searches = current_user.searches

If the call to this method is POST, this means the form has been submitted, so the method will call a mailer method (email_search, which we still need to write) and pass the @searches array as a parameter.
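The fallback email regexp in validate_email_params (used when Devise isn’t loaded) can be exercised on its own, outside Rails. A quick stdlib-only sketch, with a hypothetical valid_email? wrapper:

```ruby
# The fallback regexp from validate_email_params above, extracted so it can
# be tested in plain Ruby. valid_email? is just an illustrative helper.
EMAIL_REGEXP = /^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}$/

def valid_email?(address)
  !!(address =~ EMAIL_REGEXP)
end

valid_email?("reader@example.edu")  # => true
valid_email?("not-an-address")      # => false
```

Note that this pattern rejects newer long TLDs (the `{2,4}` part), which is one reason to prefer Devise.email_regexp when it is available.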

6) Create the mailer method

Create a new file named search_mailer.rb in the app/mailers directory. This is similar to the record_mailer.rb file in the Blacklight gem, adapted for a list of searches rather than bookmarks.

# -*- encoding : utf-8 -*-
class SearchMailer < ActionMailer::Base
  def email_search(searches, details, url_gen_params)
    subject = I18n.t('blacklight.email_search_subject', :title => "search results")

    @searches       = searches
    @message        = details[:message]
    @url_gen_params = url_gen_params

    mail(:from => "youremail@yourserver.edu", :to => details[:to], :subject => subject)
  end
end

The subject text (blacklight.email_search_subject) doesn’t exist yet. You can see a full list of locale keys in the gem under config/locales. We’ll add the new text required for our local app under blacklight.en.yml.

en:
  blacklight:
    application_name: 'Blacklight'
    email_search_subject: 'Your saved search history'
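For a sense of how that key resolves, here is a stdlib-only sketch of the lookup I18n performs for t('blacklight.email_search_subject') under the :en locale (using plain YAML parsing rather than the real I18n backend):

```ruby
require 'yaml'

# The locale entry above, parsed and resolved the way I18n would dig into it.
locale_yaml = <<~YML
  en:
    blacklight:
      application_name: 'Blacklight'
      email_search_subject: 'Your saved search history'
YML

locales = YAML.safe_load(locale_yaml)
subject = locales.dig("en", "blacklight", "email_search_subject")
# => "Your saved search history"
```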

7) Create the mailer view

You will also need a view for this mailer to create the body of the email that will be sent. The view for document emails in the Blacklight gem is in app/views/record_mailer/email_record.html.erb.

We’ll create a similar view for the search history email.

In your local app, create a search_mailer directory in app/views, and create a new view there named email_search.text.erb.  (In other words, create app/views/search_mailer/email_search.text.erb.)

Here are your saved searches sent with the message: <%= @message %>

<% @searches.each do |s| %>
http://localhost:3000/?<%= (s.query_params).to_query %>
<% end %>
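The to_query call here is Rails’ Hash#to_query, which serializes a saved search’s query_params hash back into a URL query string. A stdlib-only approximation for flat hashes uses URI.encode_www_form (the example params are hypothetical; to_query additionally handles nested hashes):

```ruby
require 'uri'

# What a saved search's query_params hash might look like (made-up values).
query_params = { "q" => "dt:email", "search_field" => "all_fields" }

# Rails would call query_params.to_query; URI.encode_www_form is the
# stdlib near-equivalent for a flat hash of string keys and values.
query_string = URI.encode_www_form(query_params)
url = "http://localhost:3000/?#{query_string}"
# => "http://localhost:3000/?q=dt%3Aemail&search_field=all_fields"
```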

Give it a try! You should now be able to email your saved searches through Blacklight.

8) Next steps

As you can see, the email view for search is hacky. You don’t want to hardcode localhost and you should probably exclude the action and controller name in the URL. You might also want to consider moving some of the headers and text to a configuration file. (Check out config/locales/blacklight.en.yml and blacklight.fr.yml in the gem for a starting point.)

Categories: CKM

Need a Cure? LTG Tech Clinics Announced!

Convergence - Mon, 2014-01-27 14:39

The Learning Technologies Group is thrilled to announce the launch of Tech Clinics, which consists of two separate, all-day, collaborative learning events held on the second and fourth Friday of every month. All Clinics will take place in the CL-245 Multimedia Multipurpose Room in the UCSF Library Tech Commons. Below is additional information and registration instructions for each Clinic.

CLE Clinics

Whether you are a new UCSF CLE user looking for advice, or an experienced veteran in need of a check-up, LTG would like to invite you to a CLE Clinic. On the 4th Friday of each month (excluding holidays), we will be hosting a full-day, open training and collaboration session. LTG staff will be on-hand to prescribe a cure for all of the common CLE issues, and will also be offering short presentations on popular topics throughout the day (see schedule below). Feel free to join us for an individual session or for the whole day! This is a great opportunity for faculty and staff to collaborate and share ideas with other CLE users at UCSF.

  • 9am – Introduction to the CLE
  • 11am – Collaboration Roundtable
  • 2pm – Quiz activity demo and Q&A
  • 3pm – Gradebook activity demo and Q&A

The first CLE Clinic is scheduled for Friday, January 31, 9am-4pm. Click on the following link to register for an upcoming CLE Clinic.

Multimedia Clinic

Whether you are just starting out, or an experienced veteran seeking help with an advanced video project, the Learning Technologies Group would like to invite you to our new Multimedia Clinic. If you are working on a multimedia project at UCSF – this is the place to be! On the 2nd Friday of each month, LTG staff will be on-hand to prescribe the best tool for the job, including: Articulate for PowerPoint narration, Camtasia for screen recording, Final Cut Pro X and iMovie for video editing, MPEG StreamClip for video conversion, iBooks Author for multi-touch book creation and more. If you have a question about any of the video cameras and audio recorders that we loan out, this is a great time to get your questions answered.

The first Multimedia Clinic is scheduled for Friday, February 14, 9am-4pm. Click to register for an upcoming Multimedia Clinic.

If you have a laptop, we recommend that you bring it with you to both the CLE and Multimedia Clinics. Drop-ins are welcome, but please take a few minutes to register, so we can better prepare to meet your needs throughout the day.

Stay tuned for an LTG Clinic follow-up post showcasing success stories from these action-packed monthly events. Follow us on Twitter @UCSF_CLE, use the #LTGClinics hashtag, and download the CLE Clinic flyer to share with a UCSF colleague.

As always contact the Learning Technologies Group with any questions or just to say hi!

Image Credits: LTG

Categories: Convergence

Medical History at UCSF: the Department of the History of Health Sciences, 1927-1998

Brought to Light Blog - Thu, 2014-01-23 10:15

The Archives and Special Collections contain both administrative and teaching files from the Department of the History of Health Sciences, especially between the years 1985-1998, before it became a Program in the interdisciplinary Department of Anthropology, History and Social Sciences. The unit was originally created in 1927, but became official on January 1, 1930 as Department of Medical History and Bibliography, supplied with a special seminar and rare book room in the new library. Fueled by the Oslerian cultural ideal, the medical classics were read and quoted since many educated physicians still could read Latin fluently. Chairing these seminars was Le Roy Crummer, a notable bibliophile and veteran collector of old books, together with Dean Langley Porter and professors Herbert Evans and Chauncey Leake. These activities were meant to convey to UC Regents that the campus provided a cultural environment that would preclude the removal of the Medical School to the Berkeley campus.

During the 1930s and 1940s, the Department flourished under the leadership of John B. de C. M. Saunders, a Professor of Anatomy and University Librarian. During these decades, its stewardship of archival materials and historical collections expanded, particularly with the acquisition of a collection of Oriental medicine titles. The name of the unit changed to History of Health Sciences in 1965 to accurately reflect the interests of the entire campus, and Dr. Saunders was appointed Regents Professor of Medical History, a post he occupied until his retirement in 1973. His long tenure featured the development of a graduate program of studies leading to M.A. and Ph.D. degrees. His successor, Gert H. Brieger, then guided the Department from 1975 to 1984, when another change in name occurred to better illustrate its humanistic mission: History and Philosophy of Health Sciences.

Poster for the 1994 public lecture series at UCSF entitled “From House of Mercy to Biomedical Showcase: A Retrospective of Hospital Life.”

My appointment in 1985 allowed a resumption of the graduate program and the development of new elective courses for medical students, all supported by a library and audiovisual collection. With bioethics rapidly becoming an independent field, the designation History of Health Sciences returned. By this time, moreover, medical history was no longer the inspirational handmaiden of medicine it had been in its early days, but a scholarly enterprise designed to carefully reconstruct the medical past within its scientific, social, political, economic and cultural contexts. Such an outward glance, however, was complemented with an inward look at medicine itself, particularly the emotional demands of becoming and being a healer and establishing relationships with patients.

To implement such goals, the Department sponsored a program of noon-hour illustrated lectures, delivered at the Parnassus campus and open to faculty, students and staff during the 1990s. Among the most prominent themes presented with the use of slides and films were a history of the Western hospital from antiquity to AIDS and another of alternative healing traditions. In my opinion at the time, the old-fashioned lecture format was still the best way to convey the complex and contingent panorama of medicine’s impact on society. For medical students, our elective tutorials were designed to allow a guided exploration of the process of becoming a physician, both emotional and technical, with the help of historical examples.

During more than half a century of its existence, many scholars played prominent roles in the Department’s development. Among them were faculty, students, health professionals, visiting lecturers and guest speakers, as well as patrons and donors who provided resources for the unit to flourish, allowing it to remain at the forefront of similar academic medico-historical institutions in the country and the world.

Guenter B. Risse MD, PhD is a historian of health and medicine. He was the chair of the Department of the History of Health Sciences at UCSF from 1985 to 2001. He now is Professor Emeritus, Department of Anthropology, History and Social Medicine at UCSF. His most recent book “Plague, Fear and Politics in San Francisco’s Chinatown” was published in 2012 by Johns Hopkins University Press; it depicts the work of UCSF faculty during the epidemic.

Categories: Brought to Light

Eric Berne Rare Book Inventory Completed

Brought to Light Blog - Tue, 2014-01-21 10:00

The Eric L. Berne collection includes over 300 rare books from Berne’s personal library. Published between 1829 and 1984, these volumes illustrate Berne’s study of medicine, psychology, philosophy, folklore, and therapeutic techniques, as well as his published work. The researcher will find medical textbooks from Berne’s student days at McGill University in Montreal, Canada, practical manuals from psychiatric clinics and hospitals, popular “self-help” books of the 1950s and 1960s, and weighty tomes on psychoanalysis by major thinkers like Freud, Erikson, and Federn. Many books are underlined and annotated in Berne’s handwriting.

Cover of Berne’s medical school textbook “The Autonomic Functions and the Personality” by Dr. Edward J. Kempf, 1921

Berne’s annotations in “The Autonomic Functions and the Personality”

The collection also includes copies of Berne’s published works. His 1964 best-seller Games People Play was translated into nearly twenty different languages, and the Italian, German, Danish, Finnish, Swedish, Hebrew, Chinese, Norwegian, and Dutch editions are represented on the shelves. Working copies and first editions of The Mind in Action, A Layman’s Guide to Psychiatry and Psychoanalysis, The Structure and Dynamics of Organizations and Groups, Principles of Group Therapy, Transactional Analysis in Psychotherapy: A Systematic Individual and Social Psychiatry, and What Do You Say After You Say Hello? are available, as well as works by other contemporary and later practitioners of Transactional Analysis.

Cover of Dutch edition of Games People Play (Mens erger je niet)

The rare book collection will soon be searchable through the UCSF Library catalog, and is available to researchers in the Archives and Special Collections Reading Room.

Categories: Brought to Light