Thursday, June 29, 2017

2017-06-29: Joint Conference on Digital Libraries (JCDL) 2017 Trip Report

The 2017 Joint Conference on Digital Libraries (JCDL) took place at the University of Toronto, Canada. From June 19-23, we (WS-DL) attended workshops, tutorials, panels, and a doctoral consortium. The theme of this year's conference was #TOscale, #TOanalyze, and #TOdiscover. The conference provided researchers from disciplines such as digital library research and information science with the opportunity to communicate the findings of their respective research areas.
Day 1 (June 19)
The first (pre-conference) day kicked off with a Doctoral Consortium and a tutorial - Introduction to Digital Libraries. These events took place in parallel with a workshop - the 6th International Workshop on Mining Scientific Publications (WOSP 2017). The final event of the day was a tutorial titled, "Scholarly Data Mining: Making Sense of the Scientific Literature."

Day 2 (June 20)
The conference officially started on the second day with opening remarks from Ian Milligan, shortly followed by a keynote from Liz Lyon in which she presented a retrospective on data management, highlighting the successes and achievements of the last decade, assessing the current state of data, and providing insight into the research, policies, and practices needed to sustain progress.
Following Liz Lyon's keynote, Dr. Justin Brunelle opened the Web archives paper session with a presentation for a full paper titled, "Archival Crawlers and JavaScript: Discover More Stuff but Crawl More Slowly." In this presentation, he discussed the challenges Web archives face in crawling pages with deferred representations due to JavaScript, and proposed a method for discovering and archiving deferred representations and their respective descendants which are only visible from the client.
Next, Faryaneh Poursardar presented a short paper - "What is Part of that Resource? User Expectations for Personal Archiving" - where she talked about the difficulty users face in answering the question: what is part of and what is not part of an Internet resource? She also explored various user perceptions of this question and its implications for personal archiving.
Next, Dr. Weijia Xu presented a short paper - "A Portable Strategy for Preserving Web Applications and Data Functionality". Dr. Xu proposed a preservation strategy for decoupling web applications from their data and hosting environment in order to improve the reproducibility and portability of the applications across different platforms over time.
Sawood Alam was scheduled to present his short paper titled: "Client-side Reconstruction of Composite Mementos Using ServiceWorker," but his flight was cancelled the previous day, delaying his arrival until after the paper session. 
Dr. Nelson presented the paper on his behalf, and discussed the use of the ServiceWorker (SW) web API to help archival replay systems avoid the problem of incorrect URI references due to URL rewriting, by strategically rerouting HTTP requests for embedded resources instead of rewriting URLs.
The conference continued with the second paper session (Semantics and Linking) after a break. This session consisted of a pair of full paper presentations followed by a pair of short paper presentations.
First, Pavlos Fafalios presented - "Building and Querying Semantic Layers for Web Archives," which was also a Vannevar Bush Best Paper Nominee. Pavlos Fafalios proposed a means to improve the use of web archives. He highlighted the lack of efficient and meaningful methods for exploring web archives, and proposed an RDF/S model and distributed framework that describes semantic information about the content of web archives.
Second, Abhik Jana presented "WikiM: Metapaths based Wikification of Scientific Abstracts" - a method of wikifying scientific publication abstracts - in order to effectively help readers decide whether to read the full articles. 
Third, Dr. Jian Wu presented "HESDK: A Hybrid Approach to Extracting Scientific Domain Knowledge Entities." Dr. Jian Wu presented a variant of automatic keyphrase extraction called Scientific Domain Knowledge Entity (SDKE) extraction. Unlike keyphrases (important noun phrases of a document), SDKEs are spans of text representing concepts that can be classified as a process, material, task, dataset, etc.
Fourth, Xiao Yang presented "Smart Library: Identifying Books in a Library using Richly Supervised Deep Scene Text" - a library inventory building/retrieval system based on scene text reading methods, which has the potential of reducing the manual labor required to manage book inventories.
The third paper session (Collection Access and Indexing) began with Martin Toepfer's presentation of his full paper (Vannevar Bush Best Paper Nominee) titled: "Descriptor-invariant Fusion Architectures for Automatic Subject Indexing: Analysis and Empirical Results on Short Texts." He discussed the need for digital libraries to automatically index documents accurately, especially considering concept drift and the rapid increase in content such as scientific publications. Martin Toepfer also discussed approaches for automatic indexing as a means to help researchers and practitioners in digital libraries decide on appropriate methods.
Next, Guillaume Chiron presented his short paper titled: "Impact of OCR errors on the use of digital libraries. Towards a better access to information." He discussed his research to estimate the impact of OCR errors on the use of the Gallica Digital Library from the French National Library, and proposed a means for predicting the relative mismatch between queried terms and the target resources due to OCR errors.
Next, Dr. Kevin Page presented a short paper titled: "Information-Seeking in Large-Scale Digital Libraries: Strategies for Scholarly Workset Creation." He discussed his research which examined the information-seeking models ('worksets') proposed by the HathiTrust Research Center for research into the 15 million volumes of HathiTrust content. This research also involved assessing whether the information-seeking models effectively capture emergent user activities of scholarly investigation.
Next, Dr. Peter Darch presented a short paper titled: "Uncertainty About the Long-Term: Digital Libraries, Astronomy Data, and Open Source Software." Dr. Darch talked about the uncertainty digital library developers experience when designing and implementing digital libraries by presenting the case study of building the Large Synoptic Survey Telescope (LSST) digital library.
The third paper session concluded with a short paper presentation from Jaimie Murdock titled: "Towards Publishing Secure Capsule-based Analysis," in which he discussed recent advancements in providing aid to HTDL (HathiTrust Digital Library) researchers who intend to publish their results from Big Data analysis of the HTDL. The advancements include provenance, workflows, worksets, and non-consumptive exports.
After the Day 2 paper sessions, Dr. Nelson conducted the JCDL plenary community meeting in which attendees were given the opportunity to give feedback to improve the conference. The plenary community meeting was followed by Minute Madness - a session in which authors of posters had one minute to convince the audience to visit their poster stands.
The Minute Madness gave way to the poster session and a reception followed. 
Day 3 (June 21)
Day 3 started with a keynote from Dr. Raymond Siemens, in which he discussed the ways a social scholarship framing of the production, accumulation, organization, retrieval, and navigation of knowledge encourages building knowledge to scale in a Humanistic context.
Following the keynote, the fourth paper session (Citation Analysis) began with a prerecorded full paper presentation (Vannevar Bush Best Paper Nominee) from Dr. Saeed-Ul Hassan titled: "Identifying Important Citations using Contextual Information from Full Text," in which he addressed the problem of classifying cited work into important and non-important classes with respect to the developments presented in a research publication, an important step for algorithms designed to track emerging research topics.
Next, Luca Weihs presented a full paper titled: "Learning to Predict Citation-Based Impact Measures." He presented non-linear probabilistic techniques for predicting the future scientific impact of a research paper, unlike linear probabilistic methods which focus on understanding the past and present impact of a paper.
The final full paper presentation from this session was titled: "Understanding the Impact of Early Citers on Long-Term Scientific Impact" and was presented by Mayank Singh. Mayank Singh presented his investigation into whether the set of authors who cite a paper early (within 1-2 years) affects the paper's Long-Term Scientific Impact (LTSI). In his research he discovered that influential early citers negatively affect LTSI, probably due to "attention stealing."
The conference continued with the fifth paper session (Exploring and Analyzing Collections) consisting of three full paper presentations. The first (Student Paper Award Nominee), titled: "Matrix-based News Aggregation: Exploring Different News Perspectives," was presented by Norman Meuschke. He presented NewsBird, a Matrix-based News Analysis (MNA) system which helps users see news from various perspectives, as a means to help avoid biased news consumption.
The second paper (Vannevar Bush Best Paper Nominee), titled: "Quill: A Framework for Constructing Negotiated Texts - with a Case Study on the US Constitutional Convention of 1787," was presented by Dr. Nicholas Cole, who presented the Quill framework. Quill is a new approach to present and study formal negotiation records such as creation of constitutions, treaties, and legislation. Quill currently hosts the records of the Constitutional Convention of 1787 that wrote the Constitution of the United States.
The final presentation for this session was from Dr. Kevin Page, titled: "Realising a Layered Digital Library: Exploration and Analysis of the Live Music Archive through Linked Data," in which he discussed his research which followed a Linked Data approach to build a layered Digital Library, utilizing content from the Internet Archive's Live Music Archive.
The sixth paper session (Text Extraction and Analysis) consisted of three full paper presentations. The first, titled: "A Benchmark and Evaluation for Text Extraction," was presented by Dr. Hannah Bast. Dr. Bast highlighted the difficulty of extracting text from PDF documents due to the fact that PDF is a layout-based format which specifies position information of characters rather than semantic information (e.g., body text or footnote). She also presented her evaluation results of 13 state-of-the-art tools for extracting text from PDF. She showed that her method, Icecite, outperformed the other tools but is not perfect, and outlined the steps necessary to make text extraction from PDF a solved problem.
Next, Kresimir Duretec presented "A text extraction software benchmark based on a synthesized dataset." To help text data processing workflows in digital libraries, he described a dataset generation method based on model-driven engineering principles and used it to synthesize a dataset and its ground truth directly from a model. He also presented a benchmark for text extraction tools.
This paper session concluded with a presentation by Tokinori Suzuki titled: "Mathematical Document Categorization with Structure of Mathematical Expressions." He presented his research in Mathematical Document Categorization (MDC) - a task of classifying mathematical documents into mathematical categories such as probability theory and set theory. He proposed a classification method that uses text and the structures of mathematical expressions.
The seventh paper session (Collection Building) consisted of three full paper presentations, and began with Dr. Federico Nanni's presentation (Best Student Paper Award Nominee) titled: "Building Entity-Centric Event Collections." Federico Nanni presented an approach that utilizes large web archives to build event-centric sub-collections consisting of core documents related to the events as well as documents associated with the premise and consequences of events.
Next, Jan R. Benetka presented a paper titled: "Towards Building a Knowledge Base of Monetary Transactions from a News Collection," where he addressed the problem of extracting structured representations of economic events (e.g., large company buyouts) from a large corpus of news articles. He presented a method which combines natural language processing and machine learning techniques to address this task.
I concluded the seventh paper session with a presentation titled: "Local Memory Project: providing tools to build collections of stories for local events from local sources". In this presentation, I discussed the need to expose local media sources, and introduced two tools under the umbrella of the Local Memory Project. The first tool, Geo, helps users discover nearby local news media sources such as newspapers, TV, and radio stations. The second, a collection building tool, helps users build, save, share, and archive collections of local events from local sources for US and non-US media sources.
Here are the slides I presented:
The eighth paper session (Classification and Clustering) occurred in parallel with the sixth paper session. It consisted of a pair of full papers and a pair of short papers. The first paper, titled: "Classifying Short Unstructured Data using the Apache Spark Platform," was presented by Saurabh Chakravarty. Saurabh Chakravarty highlighted the difficulty traditional classifiers have in classifying tweets. This difficulty is partly due to the shortness of tweets, and the presence of abbreviations, hashtags, emojis, and non-standard usage of written language. Consequently, he proposed the use of the Spark platform to implement two short text classification strategies. He also showed these strategies are able to effectively classify millions of texts composed of thousands of distinct features and classes.
Next, Abel Elekes presented his full paper (Best Student Paper Award Nominee) titled: "On the Various Semantics of Similarity in Word Embedding Models," in which he discussed results from two experiments run to determine when exactly the similarity scores of word embedding models are meaningful. He proposed that his method could provide a better understanding of the notion of similarity in embedding models and improve the evaluation of such models.
Next, Mirco Kocher presented his short paper titled: "Author Clustering Using Spatium." Mirco Kocher proposed a model for clustering authors after presenting the author clustering problem as it relates to authorship attribution questions. The model he proposed uses a distance measure called Spatium, derived from a weighted version of the L1 norm (the Canberra measure). He showed that this model produced high precision and F1 values when evaluated on 20 test collections.
Finally, Shaobin Xu presented a short paper titled: "Retrieving and Combining Repeated Passages to Improve OCR." He presented a new method to improve the output of Optical Character Recognition (OCR) systems. The method begins with detecting duplicate passages, then it performs a consensus decoding which is combined with a language model.
The ninth paper session (Content Provenance and Reuse) began with Dr. David Bamman's full paper presentation titled: "Estimating the Date of First Publication in a Large-Scale Digital Library." Dr. David Bamman discussed his findings from evaluating methods for approximating the date of first publication. The methods considered (and used in practice) include: using the date of publication from available metadata, multiple deduplication methods, and automatically predicting the date of composition from the text of the book. He found that a simple heuristic of metadata-based deduplication performs best in practice.
Dr. George Buchanan presented his full paper titled: "The Lowest form of Flattery: Characterising Text Re-use and Plagiarism Patterns in a Digital Library Corpus," in which he discussed a first assessment of text re-use (plagiarism) for the digital libraries domain, and suggested measures for more rigorous plagiarism detection and management.
Next, Corinna Breitinger presented her short paper titled: "CryptSubmit: Introducing Securely Timestamped Manuscript Submission and Peer Review Feedback using the Blockchain." She introduced CryptSubmit as a means to address the fear researchers have that their work may be leaked or plagiarized by a program committee or anonymous peer reviewers. CryptSubmit utilizes the decentralized Bitcoin blockchain to establish trust and verifiability by creating a publicly verifiable and tamper-proof timestamp for manuscripts.
Next, Mayank Singh presented a short paper titled: "Citation sentence reuse behavior of scientists: A case study on massive bibliographic text dataset of computer science." He proposed a new model of conceptualizing plagiarism in scholarly research based on the reuse of explicit citation sentences in scientific research articles, unlike traditional plagiarism detection which uses text similarity. He provided examples of plagiarism and revealed that this practice is widespread even among well-known researchers.
A conference banquet at Sassafraz Restaurant followed the last paper session of the day.
During the banquet, awards for best poster, best student paper, and the Vannevar Bush best paper award were given. Sawood Alam received the most votes for his poster - Impact of URI Canonicalization on Memento Count - and thus received the award for best poster. Felix Hamborg, Norman Meuschke, and Dr. Bela Gipp received the best student paper award for: "Matrix-based News Aggregation: Exploring Different News Perspectives." Finally, Dr. Nicholas Cole, Alfie Abdul-Rahman, and Grace Mallon received the Vannevar Bush best paper award for "Quill: A Framework for Constructing Negotiated Texts - with a Case Study on the US Constitutional Convention of 1787."
Day 4 (June 22)
Day four of the conference began with a panel session titled: "Can We Really Show This?: Ethics, Representation and Social Justice in Sensitive Digital Space," in which ethical issues experienced by curators who work with sensitive and contentious content from marginalized populations were addressed. The panel consisted of Deborah Maron (moderator), and the following speakers: Dorothy Berry, Raegan Swanson, and Erin White.
The tenth and last paper session (Scientific Collections and Libraries) followed and consisted of three full paper presentations. First, Dr. Abdussalam Alawini presented a paper titled: "Automating data citation: the eagle-i experience," in which he highlighted the growing concern of giving credit to contributors and curators of datasets. He presented his research in automating citation generation for an RDF dataset called eagle-i, and discussed a means to generalize this citation framework across a variety of different types of databases.
Next, Sandipan Sikdar presented "Influence of Reviewer Interaction Network on Long-term Citations: A Case Study of the Scientific Peer-Review System of the Journal of High Energy Physics" (Best Student Paper Award Nominee). He presented his research which sought to answer the question: "Could the peer review system be improved?" amid a consensus from the research community that it is indispensable but flawed. His research attempted to answer this question by introducing a new reviewer-reviewer interaction network, showing that structural properties of this network surprisingly serve as strong predictors of the long-term citations of a submitted paper.
Finally, Dr. Martin Klein presented: "Discovering Scholarly Orphans Using ORCID". Dr. Martin Klein proposed a new paradigm for archiving scholarly orphans - web-native scholarly objects that are largely neglected by current archival practices. He presented his research which investigated the feasibility of using Open Researcher and Contributor ID (ORCID) as a means for discovering the web identities and scholarly orphans of active researchers.
Here are the slides he presented:
Dr. Salvatore Mele gave the keynote of the day. He discussed the significant impact Preprints have had on research, such as in the High-Energy Physics domain, which has benefited from a rich Preprint culture for more than half a century. He also reported on the results of two studies that aimed to assess the coexistence and complementarity between Preprints and academic journals that are less open.
The 2017 JCDL conference officially concluded with Dr. Ed Fox's announcement of the 2018 JCDL conference to be held at the University of North Texas. 
--Nwala

Monday, June 26, 2017

2017-06-26: IIPC Web Archiving Conference (WAC) Trip Report

Mat Kelly reports on the International Internet Preservation Consortium (IIPC) Web Archiving Conference (WAC) 2017 in London, England.                            

In the latter part of Web Archiving Week (#waweek2017) from Wednesday to Friday, Sawood and I attended the International Internet Preservation Consortium (IIPC) Web Archiving Conference (WAC) 2017, held jointly with the RESAW Conference at the Senate House and British Library Knowledge Center in London. Each of the three days had multiple tracks. Reported here are the presentations I attended.

Prior to the keynote, Jane Winters (@jfwinters) of University of London and Nicholas Taylor (@nullhandle) welcomed the crowd with admiration toward the Senate House venue. Leah Lievrouw (@Leah53) from UCLA then began the keynote. In her talk, she walked through the evolution of the Internet as a medium to access information prior to and since the Web.

With reservation toward the "Web 3.0" term, Leah described a new era in the shift from documents to conversations, to big data. With a focus toward the conference, Leah described the social science and cultural breakdown as it has applied to each Web era.

After the keynote, two concurrent presentation tracks proceeded. I attended a track where Jefferson Bailey (@jefferson_bail) presented "Advancing access and interface for research use of web archives". First citing an updated metric of the Internet Archive's holdings (see Ian's tweet below), Jefferson provided an update on some contemporary holdings and collections by IA, inclusive of some of the details on his GifCities project (introduced with IA's 20th anniversary, see our celebration), which provides searchable access to the archive's holdings of the animated GIFs that once resided on Geocities.com.

In addition to this, Jefferson also highlighted the beta features of the Wayback Machine, inclusive of an anchor text-based search algorithm, a MIME-type breakdown, and much more. He also described some other available APIs, inclusive of one built on top of WAT files, a metadata format derived from WARC.

Through recent efforts by IA for their anniversary, they also had put together a collection of military PowerPoint slide decks.

Following Jefferson, Niels Brügger (@NielsBr) led a panel consisting of a subset of authors from the first issue of his journal, "Internet Histories". Marc Weber stated that the journal had been in the works for a while. When he initially told people he was looking at the history of the Web in the 1990s, people were puzzled. He went on to compare the Internet to being in its Victorian era, having evolved from 170 years of the telephone and 60 years of being connected through the medium. Of the vast history of the Internet, we have preserved relatively little. He finished by noting that we need to treat history and preservation as something that should be done quickly, as we cannot go back later to find the materials if they are not preserved.

Steve Jones of the University of Illinois at Chicago spoke second about the Programmed Logic for Automatic Teaching Operations (PLATO) system. There were two key interests, he said, in developing for PLATO -- multiplayer games and communication. The original PLATO lab was in a large room and, because of laziness, the developers could not be bothered to walk to each other's desks, so they developed the "Talk" system to communicate and save messages so the same message would not have to be communicated twice. PLATO was not designed for lay users but for professionals, he said, but was also used by university and high school students. "You saw changes between developers and community values," he said, "seeing development of affordances in the context of the discourse of the developers that archived a set of discussions." Access to the PLATO system is still available.

Jane Winters presented third on the panel stating that there is a lot of archival content that has seen little research engagement. This may be due to continuing work on digitizing traditional texts but it is hard to engage with the history of the 21st century without engaging with the Web. The absence of metadata is another issue. "Our histories are almost inherently online", she said, "but they only gain any real permanence through preservation in Web archives. That's why humanists and historians really need to engage with them."

The tracks then joined together for lunch and split back into separate sessions, where I attended the presentation, "A temporal exploration of the composition of the UK Government Web Archive", which examined the evolution of the UK National Archives (@uknatarchives). This was followed by a presentation by Caroline Nyvang (@caobilbao) of the Royal Danish Library that examined current web referencing practices. Her group proposed the persistent web identifier (PWID) format for referencing Web archives, which was eerily similar to the URI semantics often used in another protocol.

Andrew (Andy) Jackson (@anjacks0n) then took the stage to discuss the UK Web Archive's (@UKWebArchive) catalog and challenges they have faced while considering the inclusion of Web archive material. He detailed a process, represented by a hierarchical diagram, to describe the sorts of transformations required in going from the data to reports and indexes about the data. In doing so, he also juxtaposed and compared his process with other archival workflows that would be performed in a conventional library catalog architecture.

Following Andy, Nicola Bingham (@NicolaJBingham) discussed curating collections at the UK Web Archive, which has been archiving since 2013, and challenges in determining the boundaries and scope of what should be collected. She encouraged researchers to engage to shape their collections. Their current holdings consist of about 400 terabytes with 11 to 12 billion records, growing 60 to 70 terabytes and 3 billion records per year. Their primary mission is to collect UK web sites under UK TLDs (like .uk, .scot, .cymru, etc.). Domains are currently capped at 512 megabytes of preserved content, but even then other technical limitations exist in capture (like proprietary formats, plugins, robots.txt, etc.).

When Nicola finished, there was a short break. Following that, I traveled upstairs in the Senate House to the "Data, process, and results" workshop, led by Emily Maemura (@emilymaemura). She first described three different research projects where each of the researchers were present and asked attendees to break out into groups to discuss the various facets of each project in detail with each researcher. I opted to discuss Federico Nanni's (@f_nanni) work with him and a group of other attendees. His work consisted of analyzing and resolving issues in the preservation of the web site of the University of Bologna. The site specifies a robots.txt exclusion, which makes the captures inaccessible to the public, but through his investigation and efforts, he was able to change their local policy to allow for further examination of the captures.

With the completion of the workshop, everyone still in attendance joined back together in the Chancellor's Hall of the Senate House as Ian Milligan (@ianmilligan1) and Matthew Weber (@docmattweber) gave a wrap-up of the Archives Unleashed 4.0 Datathon, which had occurred prior to the conference on Monday and Tuesday. Part of the wrap-up was time given to three top-ranked projects as determined by judges from the British Library. The group I was a part of at the Datathon, "Team Intersection", was one of the three, so Jess Ogden (@jessogden) gave a summary presentation. More information on our intersection analysis between multiple data sets can be found on our GitHub.io page. A blog post with more details will be posted here in the coming days detailing our report of the Datathon.

Following the AU 4.0 wrap-up, the audience moved to the British Library Knowledge Center for a panel titled, "Web Archives: truth, lies and politics in the 21st century". I was unable to attend this, opting for further refinement of the two presentations I was to give on the second day of IIPC WAC 2017 (see below).

Day Two

The second day of the conference was split into three concurrent tracks -- two at the Senate House and a third at the British Library Knowledge Center. Given I was slated to give two presentations at the latter (and the venues were about 0.8 miles apart), I opted to attend the sessions at the BL.

Nicholas Taylor opened the session with the scope of the presentations for the day and introduced the first three presenters. First on the bill was Andy Jackson with "Digging documents out of the web archives." He initially compared this talk to the one he had given the day prior (see above) relating to the workflows in cataloging items. In the second day's talk, he discussed the process of the Digital ePrints team and the inefficiencies of its manual process for ingesting new content. Based on this process, his team set up a new harvester that watches targets, extracts the document and machine-readable metadata from the targets, and submits it to the catalog. Still, issues remained, one being what to identify as the "publication" for e-prints relative to the landing page, assets, and what is actually cataloged. He discussed the need for further experimentation using a variety of workflows to optimize the outcome for quality and to ensure the results are discoverable and accessible while the process remains mostly automated.

Ian Milligan and Nick Ruest (@ruebot) followed Andy with their presentation on making their Canadian web archival data sets easier to use. "We want web archives to be used on page 150 in some book," they said, reinforcing that they want the archives to inform the insights instead of the subject necessarily being about the archives themselves. They also discussed their extraction and processing workflow of acquiring the data from the Internet Archive and then using Warcbase and other command-line tools to make the data contained within the archives more accessible. Nick said that since last year when they presented webarchives.ca, they have indexed 10 terabytes representative of over 200 million Solr docs. Ian also discussed derivative datasets they had produced, inclusive of domain and URI counts, full-text, and graphs. Making the derivative data sets accessible and usable by researchers is a first step in their work being used on page 150.

Greg Wiedeman (@GregWiedeman) presented third in the technical session by first giving context of his work at the University at Albany (@ualbany), where they are required to preserve state records with no dedicated web archives staff. Some records have paper equivalents, like archived copies of their Undergraduate Bulletins, while digital versions might consist of Microsoft Word documents corresponding to the paper copies. They are using DACS to describe archives, so they questioned whether they should use it for Web archives. On a technical level, he runs a Python script to look at their collection of CDX files, which schedules a crawl that is displayed in their catalog as it completes. "Users need to understand where web archives come from," he says, "and need provenance to frame their research questions, which will add weight to their research."
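
Greg's actual script wasn't shown, so here is only a rough, hypothetical sketch of the general idea - scanning a collection of CDX files to see what has already been captured and when. The 11-field layout assumed in the comments is the common CDX format, not necessarily the one his files use:

```python
import glob

# Assumed 11-field CDX layout (actual files declare their own field order
# in a leading " CDX ..." header line):
# urlkey timestamp original mimetype statuscode digest redirect
# metatags length offset filename

def latest_captures(cdx_dir):
    """Map each original URI to its most recent capture timestamp."""
    latest = {}
    for path in glob.glob(f"{cdx_dir}/*.cdx"):
        with open(path) as cdx:
            for line in cdx:
                if line.startswith(" CDX"):  # skip the header line
                    continue
                fields = line.split()
                if len(fields) < 3:
                    continue
                timestamp, original = fields[1], fields[2]
                if original not in latest or timestamp > latest[original]:
                    latest[original] = timestamp
    return latest

if __name__ == "__main__":
    # URIs with stale (or no) captures could then be queued for a new crawl.
    for uri, timestamp in sorted(latest_captures("cdx").items()):
        print(timestamp, uri)
```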

A short break commenced, followed by Jefferson Bailey presenting, "Who, what when, where, why, WARC: new tools at the Internet Archive". Initially apologizing for repetition of his prior day's presentation, Jefferson went into some technical details of statistics IA has generated, APIs they have to offer, and new interfaces with media queries of a variety of sorts. They have also begun to use Simhash to identify dissimilarity between related documents.

I (Mat Kelly, @machawk1) presented next with "Archive What I See Now – Personal Web Archiving with WARCs". In this presentation I described the advancements we had made to WARCreate, WAIL, and Mink with support from the National Endowment for the Humanities, which we have reported on in a few prior blog posts. This presentation served as a wrap-up of new modes added to WARCreate, the evolution of WAIL (See Lipstick or Ham then Electric WAILs and Ham), and integration of Mink (#mink #mink #mink) with local Web archives. Slides below for your viewing pleasure.

Lozana Rossenova (@LozanaRossenova) and Ilya Kreymer (@IlyaKreymer) talked next about Webrecorder, and namely about remote browsers. Showing a live example of viewing a web archive with a contemporary browser, technologies that are no longer supported are not replayed as expected, often not being visible at all. Their work allows a user to replicate the original experience of the browser of the day to use the technologies as they were (e.g., Flash/Java applet rendering) for a more accurate portrayal of how the page existed at the time. This is particularly important for replicating artwork that is dependent on these technologies to display. Ilya also described their Web Archiving Manifest (WAM) format to allow a collection of Web archives to be used in replaying Web pages, with fetches performed at the time of replay. This patching technique allows for a more accurate replication of the page as it existed at a point in time.

After Lozana and Ilya finished, the session broke for lunch then reconvened with Fernando Melo (@Fernando___Melo) describing their work at the publicly available Portuguese Web Archive. He showed their work building an image search of their archive, using an API to describe Charlie Hebdo-related captures. His co-presenter João Nobre went into further details of the image search API, including the ability to parameterize the search by query string, timestamp, first-capture time, and whether it was "safe". Discussion from the audience afterward asked the pair what their basis was for a "safe" image.
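
I did not note the exact endpoint or parameter names of the API, so the sketch below is illustrative only: the URL and query parameters are assumptions standing in for the query string, date range, and "safe" flag João described.

```python
import requests

# Hypothetical endpoint and parameter names; only the *kinds* of parameters
# (query string, capture timestamps, "safe" filtering) come from the talk.
API = "https://arquivo.pt/imagesearch"  # placeholder URL

params = {
    "q": "Charlie Hebdo",       # query string
    "from": "20150101000000",   # earliest capture timestamp of interest
    "to": "20151231235959",     # latest capture timestamp of interest
    "safe": "yes",              # restrict results to "safe" images
}

response = requests.get(API, params=params, timeout=30)
response.raise_for_status()
for item in response.json().get("items", []):  # response field name is a guess
    print(item)
```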

Nicholas Taylor spoke about recent work with LOCKSS and WASAPI and the re-architecting of the former to open the potential for further integration with other Web archiving technologies and tools. They recently built a service for bibliographic extraction of metadata for Web harvest and file transfer content, which can then be mapped to the DOM tree. They also performed further work on an audit and repair protocol to validate the integrity of distributed copies.

Jefferson again presented to discuss IMLS-funded APIs they are developing to test transfers to their partners using WASAPI. His group ran surveys that show 15-20% of Archive-It users download their WARCs to be stored locally. Their WASAPI Data Transfer API returns a JSON object derived from the set of WARCs transferred, inclusive of fields like pagination, count, requested URI, etc. Other fields representative of an Archive-It ID, checksums, and collection information are also present. Naomi Dushay (@ndushay) then showed a video of an overview of their deployment procedure.
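
As a minimal sketch of how a client might page through such an API and verify what it downloads - assuming the fields mentioned above (pagination, per-file checksums and locations) and using a placeholder endpoint and credentials:

```python
import hashlib
import requests

# Placeholder endpoint and credentials; the JSON fields used below (files,
# next, locations, checksums) follow the description above but should be
# treated as assumptions rather than the exact Archive-It schema.
ENDPOINT = "https://example.org/wasapi/v1/webdata"
AUTH = ("user", "password")

def iter_webdata_files(collection_id):
    """Yield per-WARC records for a collection, following pagination links."""
    url = f"{ENDPOINT}?collection={collection_id}"
    while url:
        page = requests.get(url, auth=AUTH, timeout=60).json()
        yield from page.get("files", [])
        url = page.get("next")  # None on the last page

def download_and_verify(record, dest):
    """Download one WARC from its first location and verify its MD5 checksum."""
    location = record["locations"][0]
    expected_md5 = record["checksums"].get("md5")
    md5 = hashlib.md5()
    with requests.get(location, auth=AUTH, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)
                md5.update(chunk)
    return expected_md5 is None or md5.hexdigest() == expected_md5
```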

After another short break, Jack Cushman & Ilya Kreymer tag-teamed to present, "Thinking like a hacker: Security Issues in Web Capture and Playback". Through a mock dialog, they discussed issues in securing Web archives and a suite of approaches challenging users to compromise a dummy archive. Ilya and Jack also iterated through various security problems that might arise in serving, storing, and accessing Web archives, inclusive of stealing cookies, frame hijacking to display a false record, banner spoofing, etc.

Following Ilya and Jack, I (@machawk1, again) and David Dias (@daviddias) presented, "A Collaborative, Secure, and Private InterPlanetary WayBack Web Archiving System using IPFS". This presentation served as follow-on work from the InterPlanetary Wayback (ipwb) project Sawood (@ibnesayeed) had originally built at Archives Unleashed 1.0 and then presented at JCDL 2016, WADL 2016, and TPDL 2016. This work, in collaboration with David of Protocol Labs, who created the InterPlanetary File System (IPFS), was to display some advancements in both IPWB and IPFS. David began with an overview of IPFS, what problem it's trying to solve, its system of content addressing, and its mechanism to facilitate object permanence. I discussed, as with previous presentations, IPWB's integration of web archive (WARC) files with IPFS using an indexing and replay system that utilizes the CDXJ format. One item in David's recent work is bringing IPFS to the browser with his JavaScript port, which interfaces with IPFS from the browser without the need for a running local IPFS daemon. I had recently introduced encryption and decryption of WARC content to IPWB, allowing for further permanence of archival Web data that may be sensitive in nature. To close the session, we performed a live demo of IPWB consisting of data replication of WARCs from another machine onto the presentation machine.
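
For context, a CDXJ index line pairs a SURT-formatted URI key and a 14-digit timestamp with a JSON block, which in IPWB's case points at where the archived headers and payload live in IPFS. The example line and field names below are illustrative only, not IPWB's exact schema:

```python
import json

# An illustrative CDXJ-style line: SURT URI key, 14-digit timestamp, JSON block.
# The JSON field names ("locator", "mime", "status") are assumptions about how
# an IPFS-backed index might look, not a verbatim IPWB record.
line = ('com,example)/ 20170624224114 '
        '{"locator": "urn:ipfs/QmHeaderDigest/QmPayloadDigest", '
        '"mime": "text/html", "status": "200"}')

surt_key, timestamp, json_block = line.split(' ', 2)
record = json.loads(json_block)

print(surt_key)           # com,example)/
print(timestamp)          # 20170624224114
print(record["locator"])  # where the archived response lives in IPFS
```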

Following our presentation, Andy Jackson asked for feedback on the sessions and what IIPC can do to support the enthusiasm for open source and collaborative approaches. Discussions commenced among the attendees about how to optimize funding for events, with Jefferson Bailey reiterating that travel eats away a large amount of the cost for such events. Further discussions were had about why the events were not recorded and how to remodel the Hackathon events on the likes of other organizations' efforts, such as Mozilla's Global Sprints, the organization of events by the NodeJS community, and sponsoring developers for the Google Summer of Code. The audience then had further discussions on how to follow up and communicate once the day was over, inclusive of the IIPC Slack Channel and the IIPC GitHub organization. With that, the second day concluded.

Day 3

By Friday, with my presentations for the trip complete, I now had but one obligation for the conference and the week (other than write my dissertation, of course): to write the blog post you are reading. This was performed while preparing for JCDL 2017 in Toronto the following week (that I attended by proxy, post coming soon). I missed out on the morning sessions, unfortunately, but joined in to catch the end of João Gomes' (@jgomespt) presentation on Arquivo.pt, also presented the prior day. I was saddened to know that I had missed Martin Klein's (@mart1nkle1n) "Uniform Access to Raw Mementos" detailing his, Los Alamos', and ODU's recent collaborative work in extending Memento to support access to unmodified content, among other characteristics that cause a "Raw Memento" to be transformed. WS-DL's own Shawn Jones (@shawnmjones) has blogged about this on numerous occasions, see Mementos in the Raw and Take Two.

The first full session I was able to attend was Abbie Grotke's (@agrotke) presentation, "Oh my, how the archive has grown...", which detailed how the Library of Congress's Web archive has experienced a substantial increase in the size of its holdings with minimal growth in staff. While captivated, I came to know via the conference Twitter stream that Martin's third presentation of the day coincided with Abbie's. Sorry, Martin.

I did manage to switch rooms to see Nicholas Taylor discuss using Web archives in legal cases. He stated that in some cases, social media used by courts may only exist in Web archives and that courts now accept archival web captures as evidence. The first instance of using IA's Wayback Machine was in 2004 and its use in courts has been contested many times to no avail. The Internet Archive provided affidavit guidance that suggested asking the court to ensure usage of the archive will consider captures as valid evidence. Nicholas alluded to FRE 201, which allows facts to be used as evidence, the basis on which the archive has been used. He also cited various cases where expert testimony on Web archives was used (Khoday v. Symantec Corp., et al.), a defamation case where the IA disclaimer led to it being dismissed as evidence (Judy Stabile v. Paul Smith Limited et al.), and others. Nicholas also cited WS-DL's own Scott Ainsworth's (@Galsondor) work on Temporal Coherence and how a composite memento may not have existed as displayed.

Following Nicholas, Anastasia Aizman and Matt Phillips (@this_phillips) presented "Instruments for Web archive comparison in Perma.cc". In their work with Harvard's Library Innovation Lab (with which WS-DL's Alex Nwala was recently a Summer fellow), the Perma team has a goal to allow users to cite things on the Web, create WARCs of those things, and then be able to organize the captures. Their initial work with the Supreme Court corpus from 1996 to present found that 70% of the references had rotted. Anastasia asked, "How do we know when a web site has changed, and how do we know which changes are important?"

They used a variety of ways to determine significant change, inclusive of MinHash (via calculating Jaccard coefficients), Hamming distance (via SimHash), and sequence matching using a baseline. As a sample corpus, they took over 2,000 Washington Post articles consisting of over 12,000 resources, examined the SimHash distances, and found big gaps. For MinHash, the distances appeared much closer. In their implementation, they show this to the user on Perma via a banner that provides an option to highlight file changes between sets of documents.
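
As a self-contained sketch of the two similarity signals mentioned above (SimHash fingerprints compared by Hamming distance, and MinHash as an estimator of the Jaccard coefficient), using the textbook constructions rather than Perma.cc's actual implementation:

```python
import hashlib
import re

def tokens(text):
    return re.findall(r"\w+", text.lower())

def h64(s):
    """Stable 64-bit hash of a string."""
    return int.from_bytes(hashlib.blake2b(s.encode(), digest_size=8).digest(), "big")

def simhash(text):
    """64-bit SimHash over tokens; near-duplicate texts differ in few bits."""
    counts = [0] * 64
    for tok in tokens(text):
        hv = h64(tok)
        for bit in range(64):
            counts[bit] += 1 if (hv >> bit) & 1 else -1
    return sum(1 << bit for bit, c in enumerate(counts) if c > 0)

def hamming(a, b):
    """Number of differing bits between two SimHash fingerprints."""
    return bin(a ^ b).count("1")

def minhash_jaccard(text_a, text_b, num_perm=128):
    """Estimate the Jaccard coefficient of two token sets via min-wise hashing."""
    set_a, set_b = set(tokens(text_a)), set(tokens(text_b))
    agree = sum(
        min(h64(f"{i}:{t}") for t in set_a) == min(h64(f"{i}:{t}") for t in set_b)
        for i in range(num_perm)
    )
    return agree / num_perm

a = "The quick brown fox jumps over the lazy dog"
b = "The quick brown fox jumped over a lazy dog"
print(hamming(simhash(a), simhash(b)))  # few differing bits => similar documents
print(minhash_jaccard(a, b))            # approximately the true Jaccard similarity
```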

There was a brief break, then I attended a session where Peter Webster (@pj_webster) and Chris Fryer (@C_Fryer) discussed their work with the UK Parliamentary Archives. Their recent work consists of capturing official social media feeds of the members of parliament, critical as it captures their relationship with the public. They sought to examine the patterns of use and access by the members and determine the level of understanding of the users of their archive. "Users are hard to find and engage," they said, citing that users were largely ignorant about what web archives are. In a second study, they found that users wanted a mechanism for discovery that mapped to an internal view of how the parliament functions. Their studies found many things from web archives that users do not want, but a takeaway is that they uncovered some issues in their assumptions, and their study raised the profile of the Parliamentary Web Archives among their colleagues.

Emily Maemura and Nicholas Worby presented next with their discussion on origin studies as they relate to web archives, provenance, and trust. They examined decisions made in creating collections in Archive-It by the University of Toronto Libraries, namely the collections involving the Canadian political parties, the Toronto 2015 Pan Am games, and their Global Summitry Archive. From these they determined the three traits of each were that they were long running, a one-time event, and a collaboratively created archive, respectively. For the candidates' sites, they also noticed the implementation of robots.txt exclusions in a supposed attempt to prevent the sites from being archived.

Alexis Antracoli and Jackie Dooley (@minniedw) presented next about their OCLC Research Library Partnership web archive working group. Their examination determined that discoverability was the primary issue for users. Their example of Archive-It being used at Princeton without that fact being documented was one such issue. Through their study they established use cases for libraries, archives, and researchers. In doing so, they created a data dictionary of characteristics of archives inclusive of 14 data elements like Access/rights, Creator, Description, etc., with many fields having a direct mapping to Dublin Core.

With a short break, the final session then began. I attended the session where Jane Winters (@jfwinters) spoke about increasing the visibility of web archives, asking first, "Who is the audience for Web archives?" and then enumerating researchers in the arts, humanities, and social sciences. She then described various examples in the press relating to web archives, inclusive of the Computer Weekly report on the Conservatives erasing official records of speeches from the IA and Dr. Anat Ben-David's work on getting the .yu TLD restored in the IA.

Cynthia Joyce then discussed her work in studying Hurricane Katrina's unsearchable archive. Because New Orleans was not a tech-savvy place at the time and it was pre-Twitter, Facebook was young, etc., the personal record was not what it would be were the events to happen today. In her research as a citizen, she attempted to identify themes and stories that would have been missed in mainstream media. She said, "On Archive-It, you can find the Katrina collection ranging from resistance to gratitude." Only 8-9 years later did she collect the information, much of which the writers never expected to be preserved.

For the final presentation of the conference, Colin Post (@werrthe) discussed net-based art and how to go about making such works objects of art history. Colin used Alexei Shulgin's "Homework" as an example that uses pop-ups and self-conscious elements that add to the challenge of preservation. In Natalie Bookchin's course, Alexei Shulgin encouraged artists to turn in homework for grading, also doing so himself. His assignment is dominated by popups, something we view in a different light today. "Archives do not capture the performative aspect of the piece," Colin said. Citing oldweb.today, he noted that it provides interesting insights into how the page was captured over time, with multiple captures being combined. "When I view the whole piece, it is emulated and artificial; it is disintegrated and inauthentic."

Synopsis

The trip proved very valuable to my research. Not documented in this post was the time between sessions where I was able to speak to some of the presenters about their work as it related to my own, and even to those who were not presenting, finding intersections in our respective research.

Mat (@machawk1)