Code4Lib 2021 Wednesday Lightning Talks and Closing
Apr 5, 2021 23:43 · 7321 words · 35 minute read
Thank you to our Captioning Sponsors: balsamiq and MIT Libraries

Hello everyone, welcome back from breakouts and break to today's lightning talks session. As a quick recap, lightning talks are a segment where, if you have an idea, a project, or a practice that you would like to share with the community, this is your chance to do so. All of the lightning talk chat and Q&A will be in the same session listing within Whova, all eight of them that we have today, which is a little different from the talk blocks we've had previously, where we moved from session to session. If you've got a question for one of the presenters, please enter their first or last name and then your question. They'll all be presenting live, so they likely won't get to address questions until their presentation is over, and that will let them find the questions that are for them. Our first talk is going to be from Hardy, titled Load Testing with Locust.io, or How to Make Your Prod-Like Server Cry. Take it away, Hardy.
All right, I'm going to push all these buttons here and try to share my screen. All right, can you guys see that? All right, I'll start. Hi, my name is Hardy Pottinger, I'm a publishing systems developer with the California Digital Library, and this talk is all about load testing with Locust.io, or how to make your production-like server cry.

Hold up, I have to say this first: it is not okay to make a person cry. Servers or apps can't cry. In order to load test, you'll have to get past seeing your app as a person. Load testing is something you need to do when you find yourself asking, how fast can this thing go? This is a stage of any web development project: you work for weeks or months, you're getting ready to deploy, and somebody finally asks the question, how fast can this thing go? Or maybe, have we given this thing enough iron, enough memory, enough CPU? Are the specs sufficient for the load we expect? Oh man, what kind of load do we expect? That one I can't help you with, but if you want to know the upper limits of what your web app can do, given the kind of iron you fed it, Locust can help. The download and docs are available at the main site. I recommend reading the quick start; it's especially helpful for basic web app load testing, but not everything you want to load test falls exactly into that category. I'll talk a bit about API testing on the next slide, but for now, remember that Locust is written in Python, and it's organized: you define tasks, which can fit into sets of tasks (TaskSets), which can be organized into sequences.

Here's the process of working with Locust. First, you design your test: you figure out what question you're trying to answer (how fast? how many users can it support?), imagine the activity you want to replicate related to the question you're asking, make a checklist of each activity, and then pick one to focus on first. In the case of an API, it'll be a GET or a POST on an endpoint; you probably know which endpoint will be the slowest, or at least the one that will see the most use. Finally, identify your environment: probably staging, but anything production-like will work, just not prod. Okay, then encode your test. I'm glossing over this a bit, but it might look something like this; it's Python, and there are lots of examples on the Locust site. Run it locally at first, and ask your teammates to review the results: screenshots or screen share, whatever works. And here's what the dashboard looks like. Notice at the top it tells you what host you're targeting and how many users, which you can change without stopping the test. You can stop the test at any point with the big red stop button, and you can reset the stats at any point. Then revise and repeat until you're sure your tests are testing what you need to test. Each new test can be added to a TaskSet; write as many TaskSets as you need, then organize your TaskSets into a sequence. Now, keep in mind Locust is going to create hundreds, thousands, any number of simultaneous connections, so start small; you might not want to include a user login in your initial testing plan, unless that's the thing you want to test.

Okay, now you're ready to test your production-like server. I remember the first time I ran Locust against an API we were working on, I asked my teammates, I don't know what to put in here, how many users? I was thinking about our API as a person: how hard should I push our API? They said hard, so I did, and we kept increasing the numbers until we saw exactly how many simultaneous users the system could support before locking up. From the graphs and logs we were also able to see exactly when the system started struggling, which is a cool thing to know, and it's exciting when you realize what you've found. But don't forget that the visualizations you see in the Locust dashboard aren't saved, so if you see something you want for later, screenshot it or print it to a PDF, or if you do forget, just run the test over. Then look over your data, try again if you need to, and report your results. And have fun! Any questions, ping me on Slack or Whova.
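For reference, a minimal locustfile along the lines Hardy describes might look like the sketch below. The class name, endpoint paths, and weights are illustrative only (they are not from the talk), and this assumes a reasonably recent Locust release.

```python
# Minimal locustfile sketch; endpoints and weights are made up for illustration.
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # each simulated user waits 1-3 seconds between tasks
    wait_time = between(1, 3)

    @task(3)  # weighted so the busiest endpoint gets hit more often
    def search(self):
        self.client.get("/api/search?q=test")

    @task
    def get_record(self):
        self.client.get("/api/records/1")
```

Run it with something like `locust -f locustfile.py --host https://staging.example.org`, open the web dashboard, and set the number of simultaneous users there, as described in the talk.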
Great, thank you, Hardy. It does look like a question for you came in on Whova. Our next talk is going to be Jacob, with a presentation titled Beyond Passwords: A Login Server for Distributed Identification. All you, Jacob. It seems like we may have lost Jacob accidentally to technical issues, so let's just move on to the next presentation after that, which is going to be Ben, with gitstat: keeping track of local git repositories' sync status. If you're ready for that, then you can take it away.

Yeah, sure, thanks. So hi, I'm Ben Companjen, and I work as a research software engineer slash digital scholarship librarian at Leiden University Libraries. There we go. In my work I create software and I look at lots of other people's software, so I can help researchers do their research more efficiently. Often that means that I create or clone a git repository in the git folder in my home directory. I wouldn't call myself a hoarder, but I've got quite a lot of them. These folders contain either toy projects, or projects I checked out to work on or to provide bug fixes for, or production code that is distributed via GitHub or GitLab. Well, git is, as you may know, a distributed version control system: I develop on my laptop, but I want to push the results, my changes, to a centrally hosted repository like GitHub or GitLab. And I say shoot because, well, I don't always do that. Then my laptop needed to get a new battery, and there was a good chance that my drive would be wiped, so: a slight bit of panic. Yes, I do keep backups with Time Machine, but I wasn't sure those included everything that I want, plus I still need to get it to the centrally hosted system. So I wanted a tool to see which local repositories have changes that are not online, and I created gitstat to help me with that.

gitstat is a shell script that goes through all the folders in my git root folder, and for each folder it outputs the folder name and the current git status in porcelain mode, which is very readable for computers; it has all the untracked files and even shows stashes, so I can see all the untracked and modified files. Then it adds all the branches that I have, it lists all the remotes if there are any, and then a demarcation so that I know this is where this one ends. All of this goes into a status file. The script is run every 10 minutes thanks to a plist launch agent, which is kind of like a systemd unit descriptor for macOS, but those are details. As I said, everything goes into status.txt, and right, I can see that this repository has an untracked README file, and a few others. It's just a huge file, so what do I do? I need something to monitor whether this is good, whether I'm getting to a better place, so I need to summarize it, so that I can also see the numbers and see that they go down over time. I had already been playing with Prometheus and Grafana for monitoring, so why not use that? I know that Prometheus can read text files if they're formatted correctly, so my first attempt was to create a Prometheus-style text file that lists the number of lines. This is a bit of ugly grepping and echoing, but it works, and I get a gitstat .prom file that Prometheus can read. From here I see the total number of lines and the number of repositories; I can already see that there are two directories that are not git repositories, so that's kind of bad. But yeah, to show that I can indeed see these files in Grafana: I haven't done anything in the past couple of hours, but eventually this should go...

Okay, it looks like we had some more technical issues pop up there, losing audio, but we were also just about out of time. I hope everything is okay with Ben.
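As a rough idea of the approach Ben describes (and only that: the paths, file names, and metric name below are illustrative guesses, not his actual script), a gitstat-style status dump plus a Prometheus-readable summary might look like this:

```sh
#!/bin/sh
# Hypothetical sketch of a gitstat-style loop and Prometheus summary.
GITROOT="$HOME/git"
STATUS="$GITROOT/status.txt"

: > "$STATUS"                                             # start with an empty status file
for dir in "$GITROOT"/*/; do
    echo "== $dir" >> "$STATUS"                           # folder name
    git -C "$dir" status --porcelain >> "$STATUS" 2>&1    # machine-readable status
    git -C "$dir" branch --all       >> "$STATUS" 2>&1    # local and remote branches
    git -C "$dir" remote -v          >> "$STATUS" 2>&1    # remotes, if any
    echo "----"                      >> "$STATUS"         # demarcation line
done

# Summarize the line count in Prometheus text exposition format.
{
    echo "# TYPE gitstat_status_lines gauge"
    echo "gitstat_status_lines $(wc -l < "$STATUS")"
} > "$GITROOT/gitstat.prom"
```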
Let's move on to the next talk, which is going to be Lynette, with Prioritizing and Organizing User Stories around Accessing Authoritative Data.

Yes, so this one is a video. I'm Lynette Rayle, a developer at Cornell University working on the Linked Data for Production grant, and I facilitate the Best Practices for Authoritative Data working group. I'm going to talk about the outputs of the first charter of the working group. A core principle of the working group is to include members from different parts of the authoritative data pipeline. The group includes authoritative data providers, to improve understanding of how their data is being used. It includes data consumers, who represent the cataloging community and other uses of authoritative data in libraries. It also includes developers, who access authoritative data APIs and create applications and tools that make that data available to consumers as part of their workflow. The first charter focused on a common understanding of needs by defining user stories from the perspective of each of these roles: 32 cataloger user stories, 44 application developer user stories (both from the UI and backend perspective), and 38 provider user stories. Once the user stories were defined, we engaged the broader community in helping us to prioritize the cataloger user stories. We reached out to the PCC community with a survey that allowed respondents to put the stories in buckets, ranking their importance based on their own work. From these we created four prioritization categories based on the ranked level of importance. The labels in the graph are highly abbreviated, but at the end of the presentation there is a link to the full analysis of the survey if you want to see the details of the user stories.

To give you a sense of the user stories, these are the top five in the priority graph: include extended context, that is, additional information about each search result that helps the user make a more accurate selection; filtering search results by class type, for example, when searching names, limiting to person names and filtering out organizations and other types of names; when searching for an exact match, the user wants to know if that exact match doesn't exist; when editing a resource, like a work, that you want to connect to authoritative data, like a subject, the search needs to include the URI for each result, which allows the resource to link to the exact result that the user selects; and for authorities that are hierarchical, include the broader and narrower terms for each search result.

With the full set of user stories defined and the results of the survey, we created a document that organizes the user stories. The primary focus is on cataloger user stories; developer and provider user stories are listed with the cataloger user stories they support. Related user stories are gathered together: user stories related to searching are together (that is, search results for an exact match, left-anchored search, keyword search), user stories related to refining results through filtering are together, and user stories related to accurate selection, like additional context and relevancy ranking, are together. Each section also includes the priority that was assigned based on the survey. This is an example from the document of the section that gathers user stories related to performance. Within this section, it further breaks into directly related user stories that impact performance. For example, the first subsection under performance is "search results return quickly"; since it rated as a level one in the survey, it is marked as having priority level one. There's a fuller definition of what we mean by search results returning quickly, then the cataloger user stories are listed, followed by supporting developer and provider user stories. The next subsection, "time out gracefully," again has a priority, a definition, and user stories. This is the basic pattern that we follow in the document.

The first charter is complete. We will soon be starting the second charter, and we'll be focusing on change management for authoritative data. We'll work to produce documentation on common types of changes in authoritative data, create specifications on how best to represent those changes, and make some recommendations for tooling that providers can use to create change documents and cache maintainers can use to consume those documents. We're in the process of identifying members for the second charter, so please do express your interest if you would like to help move this work forward. I've included links to the output of the first charter and to the main page of the second charter. Feel free to reach out to me if you want to learn more about this work or become part of the working group.

Great, excellent presentation, Lynette, thank you. And our next talk is going to be from Anna, titled Mining Labor Records and Collections as Data at the University of Utah. Anna, all you.

Okay, is everyone seeing my screen okay? All right. So I'm Anna Neatrour, I'm interim head of digital library services and digital initiatives librarian at the University of Utah, and I'm going to talk a little bit about a collections as data project that we have completed recently at the University of Utah. I'm really presenting on behalf of a team, and that's all of us, me and Jeremy and Rachel, and our contact information is right there. So when you're talking about collections as data, a basic definition is that you're trying to encourage the computational use of digitized and born-digital collections. At the University of Utah we were also really interested in developing historical data sets based on our digital library materials, because we have a newish digital scholarship center that's housed in the library, and there are not a lot of Utah-based historical data sets that might be interesting for students and faculty to work with in a digital humanities or digital scholarship context. And also, collections as data is just kind of cool.

In Utah we have the Kennecott copper mine; it is the largest man-made excavation and deepest open-pit mine in the world. There's a photo from special collections of a miner working in the mine. We have records from the copper mine company, over forty thousand records from approximately 1900 to 1919, and they contain detailed information about the miners. We were able to get all of these digitized for free through a partnership with FamilySearch, but they weren't going to index them for us, so we needed to come up with a way to make the information in the records searchable. And the collection that we were working with is just a small fraction of a much larger Kennecott copper mine collection that we have in our special collections. Here are two extremes of the difficulties in some of these mining cards: we have one with incredibly faint text, and then we have one with almost too much text, as every single job for this miner repeatedly got annotated and added to on the same card. We did some initial trials, and we decided to focus on the area highlighted in yellow that contains demographic information about the miners, because it was more structured and also more likely to be present on most of the cards. There's a lot of interesting employment information on these cards, but it's also very inconsistent, and we decided to leave that for a later date. There's a little bit of tension between Dublin Core and standardized practices and what you might want to do if you're developing a collections as data project, so we had to create a very customized template for this project and introduce a lot of non-standard fields into our digital library repository, which I usually don't like when other people do it, but then when I do it I feel like I'm justified, and I think the results of this project were really interesting. So we do have things like eye color, weight, and height now as fields in our digital library repository.

Before the pandemic we had around seven thousand of these records transcribed, and I really thought this would be a project that we just kind of keep in the background for folks to work on, and maybe in five to ten years we'd have all of them transcribed. But when the work-from-home orders went out across our university, we, like many places, decided to spin up additional transcription projects, and so our students, who would normally be working on scanning projects for us, pivoted to transcription instead. They transcribed these directly in our metadata management tool, and the records were then reviewed before we added them to our digital library. Once all the records were transcribed, we were able to explore them a little bit, and here are some visualizations that my colleague Rachel made in Tableau, so we can see which different countries the miners were coming from.
That the U.S. would be the top country represented isn't really a surprise, but it's followed closely by Greece and Japan. We can also see employment by nationality by year, and kind of see different trends in the data based on World War One and based on things happening in the countries of origin for the miners. We were talking with demographers about the project, and they were super jazzed about being able to calculate BMI for the miners; this doesn't look very dramatic to me, but I'm just putting the screen up there in case anyone's interested. And a couple of factoids: the youngest miner was 15 years old, and the oldest in that data set was 77 years old. So this really encouraged us to develop new workflows, think about our digital collections in a new way, and really think beyond our digital library repository in terms of what we can do to make collections accessible to people. We have an article available open access that talks about this a little bit, and about our other collections as data projects. We have a GitHub site where we have all of our collections as data materials, as well as a separate GitHub repository just for the Kennecott miner records. And I'm really curious to hear more from the Code4Lib community about automated methods of dealing with transcription work that might help us in the next phase of our project. Thank you very much.

Thank you, Anna. Our next presenter is going to be Erin, with a presentation titled Designing for the Most, or, A Bellwether Speaks. All you, Erin.

Hello. Okay, so can everybody see my smiling Bitmoji mug? Bitmoji is clear? Fantastic. So hi folks, just a visit from your future here; I'm the ram with the bell around its neck. I'm Erin White, Erin R White on Twitter. This is my 11th Code4Lib. I'm head of digital engagement at VCU Libraries in Richmond, Virginia, and I've also been the interim digital collections librarian for the past five years or so; shout-out to everyone who's holding an interim appointment or who has absorbed a vacancy in your area. I know many of y'all have been doing this math too. The past year in particular brought so much hardship across all vectors of our lives, and at work that likely included layoffs, retirements, and other departures. I'm in a relatively good position: I get to say how much of this work has to get done, and it turns out half-assing a job for a quarter of my time means projects move really slowly or not at all. I'm sharing this with you not to complain, and it's also not an indictment of my library. I share it because I think this is where we're headed. The early aughts were a boom time for mass digitization and library investment in digital collections; it was a time of huge growth and excitement in digital libraries. But y'all, library budgets are not getting bigger anytime soon. It's not that we're in temporarily tough times; I think this is just how things are and will be. It sure seems to me that digital collections work, and other types of important and often invisibilized work in the library, will continue to be deprioritized when budget conversations inevitably get tough. I won't tell you not to hope and fight for the absolute best, but I will tell you to plan for the worst, or rather to plan for the most, because this is where most of us are headed, and it's not necessarily the worst; it's just way different. There are a lot of ripple effects of disinvestment that I could talk about, but I only have a few minutes, so I'll talk about the ones that haunt me the most.

At Code4Lib 2014, Sumana Harihareswara gave a keynote that I'm still thinking about seven years later. She talked about the last mile problem, the largest hurdle we face in making things usable. She gave many good examples and even wrote it up into a Code4Lib Journal article. The bottom line is that many people don't use services, even ones that are, quote, best for them, because they're simply not usable. Here's a picture of the most beautiful bus stop in Richmond, Virginia. It's my bus stop; it's not my house, though. While this bus stop has the loveliest views, it has zero amenities. It's inaccessible for many of my neighbors. It only works well for me because I have a smartphone and I can walk quickly and dodge traffic; if any of those things were to go away, or if the weather goes south, I can't use this service easily. This example is the very literal definition of the last mile problem. One of the ways the last mile problem has manifested in my work life has been that, even after a year and a half of using Islandora for our digital collections, we still haven't figured out a workflow to batch upload collections. We have added only one item to our digital collections since fall 2019.
First of all, as I said a few slides back, this is a result of disinvestment: we've had a vacant position for years. This is also a documentation problem: to get our process sorted, we've been hanging on every word of a seven-year-old blog post that's only accessible through the Wayback Machine (shout-out to the Wayback Machine). This is also, fundamentally, a last mile problem: the process assumes scripting experience and staff time to troubleshoot each bulk upload. I'm actually ashamed to admit this; I feel this failure in my body. I know that if I carved out two solid days I could probably get something working, right? It seems so fundamental, it should be simple, if I just tried harder, if I just had more time. But this isn't about me, and this isn't really about Islandora either, and I know a lot of this is fixed in version 8. Again, this isn't about Islandora. This is about beautiful bus stops that only a few people in good circumstances can use. We can and must design more usable things for each other. So I ask you to think of this: how can we adjust the angle of our vision to set our sights on each other, instead of the distant horizon of another cutting-edge, revolutionary technology that will solve all our problems? What if, instead of thinking of this as planning for the worst, we instead see it as planning for the most? Because most of us are pressed for time, for money, for the brain cells to rub together to create new workflows. By considering institutions that have fewer resources, we actually end up designing for everybody, because the center is not holding: the dividing line between have and have-not institutions is only getting stronger, with fewer institutions in between. As cultural heritage organizations, we'll continue to become interdependent on each other as time goes along; consortial, collectively held platforms and communities are the way we need to go. Code4Lib itself is a model of how this can work. We can make this work. So consider this an invitation: let's keep building the future we need, together. And I hope you'll read the open access version of Design Justice, because it really got me thinking in this direction. So thank you.

Thank you a ton, Erin. Our next presenter is going to be Ash, with the presentation Using XPath 3.1 and Its Friends to Work with JSON. All you, Ash.
Hello, let me share my screen. Sharing, and y'all can see it? I'm going to hope that's a yes. Awesome, thank you. So before I get started, my script for this talk is available in a GitHub repository; I'll link it on the next slide, and I've also posted the link in Whova and in Slack. My name is Ash Clark, my pronouns are e/em/er, and my job is XML applications developer for the Northeastern University Digital Scholarship Group and for the Northeastern Women Writers Project. What I do is turn Text Encoding Initiative XML documents into APIs and into web interfaces, and to support these I write RESTXQ applications, which are served out of XML databases; those databases are eXist-db and BaseX. I work heavily with XQuery, a scripting language that extends the functionality of XPath. I also work with XSLT and CSS, and, reluctantly, JavaScript. Lots of people prefer to get JSON responses from APIs. Whoops, there we go. I like XML better, obviously, but I want my APIs to support both, especially because I'm working in the digital humanities, or DH: there's no one tool or programming language that everybody in the field is using, and each API could be someone's first. So I warmed to JSON when I realized there was actually a lot of overlap between JSON and XML: there's one root, there's hierarchy, there are meaningful groups, and they're both common formats for data interchange. But I will never recommend JSON for marking up documents; use XML or HTML, please.

Okay, so enter XPath 3.1, XQuery 3.1, and XSLT 3.0. These are W3C recommendations that were published in 2017. The functionality is available in various processors and in the eXist-db and BaseX databases; I'm dropping a lot of names here, and I've linked to as many as I can in my script. Crucially, the new versions integrate support for working with JSON, so they introduce new complex data structures: maps, which are key-value pairs, and arrays, which are ordered groupings of values. Both of these can nest, unlike the standard sequences, and both can accept any type of value, including XML nodes and including inline functions, which are also new to these versions. I've been working with maps and arrays for most of my time at Northeastern; I love them so much. So here's an example of both: this is a map with two keys, "greeting" and "who." The value of "greeting" is a string, "hello," and the value of "who" is a three-item array consisting of a string, an XML element node, and whatever's in the variable $attendees, from Northeastern. You can access data in maps and arrays with functions or with terse lookup operators; these are two expressions that test alternate ways of getting at values in my map and my array, and both of them should return true, since the accessors are equivalent. You can also qualify the results you expect, just as you can with XPath expressions: in the first expression we want to return the variable $my-map, but only if the key "greeting" has the string value "hello"; in the second we want to get my array, but only if one of its values is the string "world." Excitingly, the new XPath, XQuery, and XSLT can also convert JSON strings or files into maps and arrays, they can serialize maps and arrays back into JSON, and they can convert JSON to XML and vice versa, with the caveat that the equivalent XML has to follow a W3C-defined schema.

So, some takeaways from my use of maps and arrays. Key-value pairs are amazing: great for mapping parameter values onto human-readable labels, and I use this often. The nesting of arrays means that I can do complex ordering of values; for example, I've done this recently, I can have an outer array that corresponds to a sort order and inner ones that correspond to a secondary sort order. I can also cache the XML representation of JSON in my XML database, which gives me indexing and quick query results. I can create RESTXQ APIs which return XML and JSON on request, and HTML if I'm applying XSLT. I can also pass maps around instead of creating XQuery functions that use oodles of required parameters. I don't have enough time to give detailed examples of my own work, but feel free to start up a conversation in the Code4Lib Slack; I'm super enthusiastic about these new features, come geek out with me. All right, thank you.

Thank you, Ash.
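To make the slide example concrete, here is a reconstruction in XQuery 3.1 of the map-and-array example as described in the talk. The exact variable names and values are inferred from the description, not copied from the slides.

```xquery
(: A map with two keys; "who" holds a three-item array of a string,
   an XML element node, and whatever is in $attendees. :)
let $attendees := "everyone joining from Northeastern"
let $my-map := map {
  "greeting": "hello",
  "who": [ "world", <institution>Northeastern University</institution>, $attendees ]
}
return (
  (: function-style and lookup-style accessors are equivalent :)
  map:get($my-map, "greeting") eq $my-map?greeting,   (: true :)
  array:get($my-map?who, 1) eq $my-map?who?1,         (: true :)

  (: qualify the results you expect, as with other XPath expressions :)
  $my-map[?greeting = "hello"],
  ($my-map?who)[?* = "world"],

  (: JSON text in, maps and arrays out, and back again :)
  parse-json('{"greeting": "hello"}')?greeting,
  serialize(map { "greeting": "hello", "count": 3 }, map { "method": "json" })
)
```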
All right, we have two more talks for this section. The next one is going to be from Michael, titled Heading Overlays, Change the Subject, and Ontology-Aware Cross-Reference Resolution. Michael, all you, when you're ready.

Hi. So this is kind of a follow-up on a presentation, or a panel discussion, that I participated in for the Discovery Days presentation; when I spoke at that time this work was very much in active development, and it has since been released. I'm going to be talking a little bit about replacing problematic subject headings. The example that we were using for development, and the example that I'm going to be talking about now, is "Illegal aliens," which I know has been covered quite a bit; there's been a lot of great work and discussion around this and around the policy implications, for which I would refer people to the Change the Subject documentary, and there was a great breakout session discussion, Technology for Reparative Description, that happened moments ago. But I'm going to talk a little bit about the technology side of this. We've focused on working with technology to be able to provide robust support for policy decisions, the goal being to separate the technological implementation from the policy decisions, and basically put us in a position where the technological options don't circumscribe the decisions that people are looking to make policy-wise. The goal has been similar to other institutions that I've heard speaking about this kind of thing: we're looking to maintain parity with search results for existing terminology. So if a user comes and searches or browses for either "illegal aliens" (the current label, which we're looking to replace in our user interface) or "undocumented immigrants" (the label we're looking to use as a replacement in our user interface), they should see similar results, the same results ideally, and ditto for subject facets and for browse displays.

So I'm going to share my screen; I hope the five minutes aren't going faster than I had planned for them to go, but time doesn't discuss these things with me. This is the public interface to the University of Pennsylvania's catalog. I'm going to do a subject heading browse for "illegal aliens," and you'll see that the browse index comes up with the term that we searched for, because we explicitly entered that term, but it refers us to the preferred form of the term, a preferred label for the term. And if we search for "undocumented immigrants," you'll see that this behaves more or less as we would expect. So we're leaving a stub in place and generating a cross-reference, so that it's transparent to the user what's happening; we want to be clear about what we're doing. And to further illustrate that, I'm also going to click through here, because part of what was tricky about this was getting the subject to display properly here. So the subject displays properly here, and, notable contrast, here we are not actually replacing the heading in the underlying metadata. Part of this is for transparency reasons: we wanted to replace it in the places where users interact with it most, but one of the real challenges has been to try to address these issues while not throwing the baby out with the bathwater. Shared ontologies are useful for reasons that have nothing to do with whether we agree or strongly disagree with certain decisions that they make: they're a common point of reference, they allow us to interoperate with other institutions, and they allow users to know what to expect. So by keeping this in place here, we're avoiding, on the library technology development side, a situation where a cataloger might make a decision intentionally and we decide, in our infinite wisdom, to override that decision in a way that is completely opaque to everybody, catalogers included.

There are a couple more things that I want to touch on here. One is that this has tied into discussions around alternate vocabularies, which I know people are discussing particularly with respect to this issue of problematic subject headings and LCSH and its shortcomings. But I would add that the same issues really apply, technologically speaking and in terms of user interface, to other sorts of alternate vocabularies: alternate thesauri, the Art and Architecture Thesaurus, whatever. There's a proliferation of different ontologies that might be very well suited to particular materials, but there's a trade-off that catalogers and metadata creators have to make between the ubiquity of something like Library of Congress Subject Headings, which is where the majority of people are looking and expecting to see things, and the specificity of another ontology that maybe is preferable for some other reason. So part of what we're trying to do with this cross-reference-based approach is to be able to accommodate different ontologies in a very general way, in a framework that is transparent but also allows us to index things in multiple different ways and provide different avenues for discovery, without causing a proliferation in the user interface that grows out of control and is difficult to understand. How am I on time?

We are just a hair over the five minutes, essentially, so we should probably move on to the next person.

Thanks, I'm available and would be happy to discuss further.
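Purely as an illustration of the kind of mapping such an overlay approach implies (this is a made-up configuration, not Penn's actual implementation), a display overlay for a heading could be expressed as data along these lines:

```python
# Hypothetical heading-overlay mapping; illustrative only.
HEADING_OVERLAYS = {
    # authorized heading in the source metadata, mapped to UI behavior
    "Illegal aliens": {
        "vocabulary": "lcsh",
        "display_label": "Undocumented immigrants",
        "cross_reference": True,   # leave a browse stub pointing to the preferred label
        "index_both_forms": True,  # searches for either term return the same records
    },
}
```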
Thank you, Michael. And from an earlier presentation that we had technical issues with, Jacob has managed to get back online and has their presentation, Beyond Passwords: A Login Server for Distributed Identification. Jacob, all you.

Yeah, thanks, and sorry for the dropout. I'd like to share a small piece of software that we wrote as a service for one of our web applications, and then it evolved to serve more web applications. It's just called login-server (we did not find a better name), and it's used for single sign-on. The motivation was: I don't want to store any passwords anymore, and I don't want to send passwords to people or help them to log in. The identity of people should come from accounts they already have at other services, so we provide GitHub, ORCID, StackExchange, Wikidata, and our internal LDAP. So you log in via GitHub; I get redirected there and back, because I'm already logged in on GitHub, and then with my account I can see the identities I have logged in with, ORCID and Wikidata and so on, too, so these are all connected. And if I want to give people access only if they have an ORCID, I can verify that they actually have the real ORCID as promised, or the same way with Wikidata. We also wanted write access for one application; for this reason I have to confirm that it's not just the identity but also a specific right, and then I get sent back to this. But normally people don't see this interface of the server; they see a web application, and one of the web applications that uses the login server is BARTOC, which we just took over management of. Here it's the same: if I log out, then I'm also logged out in BARTOC, and then I can log back in via any of these identity providers. And in BARTOC, if I edit a record, then my ORCID iD or another kind of ID is stored, and we are really sure that it is the same person. Okay, this is just how it looks, but the code is of course open source: the login-server is on GitHub, and it's documented how you can install it. You need an SSL certificate, because everything is encrypted, and you need to register each of the identity providers. We also provide a JavaScript client library for easier interaction with the login server, but the API is not that complicated: here you can see you can query which applications are allowed to authenticate or get an identity via the service, and then the application gets the identity of the person that's logged in. And here is one example of how to use the API, and another one; you could just look it up in the source code of BARTOC, for instance, or one of our other applications. In BARTOC it's basically one Vue component to show, if logged in, the name, and if not logged in, the possibility to click to log in. Then we import this login client library, and the rest is JavaScript code and Vue. You don't have to read all of this, but I just wanted to show it's just 80 lines, so it's not that complicated to use. And maybe you don't have the same requirements as we have, but it still may be inspiring to use this, or the same technology. It's not all ours: this whole login service is a thin layer above the Passport library, an existing JavaScript library for single sign-on, and we just adapted it to easily plug in different identity providers. Yeah, that's it. I hope it is of use.
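For context on what a thin layer over Passport looks like in general, here is a hedged sketch of registering one identity provider. The routes, callback URL, and identity shape are assumptions for illustration; this is the usual passport-github2 pattern, not the actual login-server code.

```js
// Hypothetical Express + Passport wiring for one provider; illustrative only.
const express = require("express");
const passport = require("passport");
const GitHubStrategy = require("passport-github2").Strategy;

passport.use(new GitHubStrategy(
  {
    clientID: process.env.GITHUB_CLIENT_ID,
    clientSecret: process.env.GITHUB_CLIENT_SECRET,
    callbackURL: "https://login.example.org/auth/github/callback",
  },
  // verify callback: map the provider profile to a local identity
  (accessToken, refreshToken, profile, done) => {
    done(null, { provider: "github", id: profile.id, name: profile.username });
  }
));

const app = express();
app.use(passport.initialize());

app.get("/auth/github", passport.authenticate("github"));
app.get("/auth/github/callback",
  passport.authenticate("github", { session: false }),
  (req, res) => res.json(req.user)   // the connected identity
);
```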
Wonderful, thank you, Jacob. And with that, that does wrap up our lightning talks section for today, and that is day three of Code4Lib. A couple of closing announcements before we head out. Just be aware that if you're sad you can't go to a karaoke bar with your fellow attendees, don't worry: we have karaoke night tonight, starting at 6 p.m. Eastern or 3 p.m. Pacific, where you can sing your heart out with other code4libbers over the internet. It will be a fun time. If you have a song that you do want to sing, though, please use the Google form posted within the Virtual Meets thread on the Whova community board, so that we can queue up your song in the playlist in advance for smooth singing. Don't forget, also, that tomorrow we have our trivia night in the evening. Space is limited for that event, so if you can, please register in advance through the Zoom registration link in the Virtual Meets thread on the Whova community board. In addition, the community support volunteers for this evening's karaoke night are (and I do apologize for how badly I'm about to butcher these) Michelle Janowiecki, or m janowiecki on Slack, and Mike Giarlo, or mj giarlo on Slack. With that, thank you everybody for coming today, and we'll see you all tomorrow.