Brief summary of current projects while remote working:
Managing some database cleanup tasks: checking and cleaning URLs for migrated e-resources that were never cleaned up after a vendor platform migration; revising basic e-resource cataloging procedures (because of overlapping tasks); and responding to questions that arise from these procedures and processes
Reworking LD4P/Sinopia Profiles and Templates: After some changes made by the cohort of libraries in the LD4P (Linked Data for Production) project, those of us working in Sinopia, a BIBFRAME editor and template builder for this new schema, realized we needed to update some templates for Series and for Administrative Metadata (to track the provenance of the data for resources), alongside ongoing efforts to add persistent data so we can avoid re-keying or re-searching for URIs
Writing up a Review for Publication: I am reviewing an item in the READING AUGUSTINE series, On Agamben, Arendt, Christianity and the Dark Arts of Civilization, by Peter Iver Kaufman. This work fundamentally compares these thinkers’ notions of ‘pilgrims,’ ‘refugees,’ and ‘pariahs,’ drawing some political/social continuities (and discontinuities) among the three paradigms (which of course largely split over worldview).
And Plenty O’ other things…
Hello metadata/cataloging friends,
I am gathering observations and technical information at a larger scale on BIBFRAME with this survey – which is intended to see where folks and institutions are with BIBFRAME, its implementation in Sinopia, how departments are getting trained, and similar questions. In fact, I hope this will become the beginning of a broader education and research project.
I am titling this project: ‘BIBFRAME Metadata Production : A Primer’
It is being funded, in small part, by a grant from AALL (the American Association of Law Libraries), which covers the acquisition of a few resources.
The survey has ten questions, most of which will be relevant, in some way, to any metadata/cataloging department doing significant metadata production at its institution. Feel free to write as much for each question as is needed to answer it.
Link to survey here.
I am currently reading: Coding with XML for Efficiencies in Cataloging and Metadata by Timothy W. Cole, Myung-Ja (MJ) K. Han, and Christine Schwartz.
I have been working with XML for a few years now in different contexts – but I have always been working with the structure itself – not its search, transformation, or display.
The other key relevance to my work is that the digital preservation platform we are migrating to from CONTENTdm may force us, for certain types of digitally preserved materials, to write XSL. If we need this, I will now be able to help more. 🙂
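To make the searching/transformation side of this concrete: here is a minimal sketch, using only Python’s standard library rather than XSLT, of the kind of XML work the book covers – pulling fields out of a namespaced record. The record below is hypothetical sample data, loosely modeled on Dublin Core.

```python
import xml.etree.ElementTree as ET

# A toy metadata record (hypothetical data, Dublin Core-style elements).
record_xml = """\
<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Coding with XML for Efficiencies in Cataloging and Metadata</dc:title>
  <dc:creator>Cole, Timothy W.</dc:creator>
  <dc:creator>Han, Myung-Ja K.</dc:creator>
</record>
"""

# Map the dc: prefix so searches can use it in path expressions.
NS = {"dc": "http://purl.org/dc/elements/1.1/"}

root = ET.fromstring(record_xml)
title = root.findtext("dc:title", namespaces=NS)
creators = [el.text for el in root.findall("dc:creator", namespaces=NS)]

print(title)
print("; ".join(creators))
```

The same search-then-extract pattern is what an XSLT template match does declaratively; in a real migration the element names and namespaces would come from the target platform’s schema.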
Thank you to the authors for this book
The Library of Congress publishes the standard list of Geographic Area Codes that go into Machine-Readable Cataloging (MARC) records. The current version of the format is called MARC 21 and is just one of the record formats that metadata technicians and cataloguers use to create and structure information in a networked environment.
These are the records you see when you search your local library for items.
The purpose of the geographic codes is to add, in standardized coded form, a fixed-field version of any geographic subject or relationship of the work in hand. It is these fixed forms of information that the computer actually reads.
For example: We read the words that say “United States,” “Indonesia,” “Iran” or “Kenya” in the subject section of the item record in the catalogue. This text is usually hyperlinked, because we can use it to browse other resources that are categorized under the same subjects or have similar geographic relationships. These terms are often added to the 650 or 651 fields in MARC 21 records.
But the computer needs standardized forms of this information in order to organize records properly. Thus, “United States” is read by the computer as ‡a n-us---, Iran as ‡a a-ir---, and Kenya as ‡a f-ke---. These codes all go, as many as are needed, into the 043 field.
There is even a code for the whole Earth, ‡a x------, and for the Solar System, ‡a zs-----.
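The pattern behind all of these is a short code right-padded with hyphens to a fixed seven-character width. A plain-Python sketch of the mapping, using only the codes quoted above (the full list is maintained by the Library of Congress):

```python
# Map human-readable geographic terms to their (unpadded) MARC 21
# Geographic Area Codes -- only the examples from this post.
gac = {
    "United States": "n-us",
    "Iran": "a-ir",
    "Kenya": "f-ke",
    "Earth": "x",
    "Solar System": "zs",
}

def gac_043(code: str) -> str:
    """Right-pad a Geographic Area Code to the fixed 7-character width
    used in MARC field 043."""
    return code.ljust(7, "-")

for term, code in gac.items():
    print(f"043 ‡a {gac_043(code)}  ({term})")
```

The fixed width is what makes the field machine-readable: the computer never has to guess where one code ends and the next begins.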
But the MARC 21 Geographic Area Code list, for all it might be criticized for, is missing a fundamental geographic representation: a code to reflect the Internet, or any time-space segment of reality connected with cyberspace.
I think it’s high time to repair this gaping hole in the code list.
We need a geographic code that represents the Internet, Cyberspace, the World Wide Web (Inter-webs) – whatever you call it in your language – as a discrete and specific geographic location.
I have ideas too…
We can’t use ‡a i------ (which could stand for Internet), because that is claimed for the Indian Ocean. And we can’t claim ‡a c------ (which could stand for Cyberspace), because that has already been claimed for Intercontinental areas (Western Hemisphere). No, we need another code.
And I just happen to have found a gap in the code sequence allowing the perfect code to slot in.
I think we should use ‡a it----- (to represent the Internet). 🙂 Not only does this code reflect major letters in “Internet,” but it also accomplishes a secondary goal of reflecting the work of the Internet itself: IT (Information Technology).
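The collision argument above can be sketched as a toy availability check. The sample set below covers only the codes mentioned in this post, not the full LC list, so it is an illustration rather than a real validator:

```python
# Tiny sample of already-assigned MARC 21 Geographic Area Codes
# (unpadded) -- just the ones discussed in this post.
assigned = {
    "i": "Indian Ocean",
    "c": "Intercontinental areas (Western Hemisphere)",
    "n-us": "United States",
    "a-ir": "Iran",
    "f-ke": "Kenya",
}

def is_available(code: str) -> bool:
    """True if the proposed (unpadded) code is not already claimed
    in our sample of the code list."""
    return code not in assigned

print(is_available("i"))   # taken by the Indian Ocean
print(is_available("c"))   # taken by Intercontinental areas
print(is_available("it"))  # the gap this post proposes to fill
```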
This is revolutionary…
Who’s with me?
Thanks for reading.
Let me draw attention to this post. Of course, the post itself is its own attention-getter. But let me add a little point. On the right side of this blog are two sections: one is called “What it’s All About” and the other is called “Non-Hierarchical Metadata (Tags).” I refer to these two sections, combined, as #Metadata.
Both of these sections are designed to suggest pathways a reader can use to browse or click from post to post, in the hope that one reads more and more. Each blog post also has at its very top a list of categories and tags one can use to think about the post while reading and then follow to other posts. Please feel free to make use of this feature and try it out.
Thank you for reading.
From CHICAGO’S McCormick Place/Convention Center: Watching traffic heading south along Lake Michigan with the lake a mere 200 yards in the distance – beautiful.
Vendors and exhibitors are currently setting up for five days of learning and connecting on all things library. This will include books, ebooks, and author events, of course, and many publishers are in attendance. But there are also lots of technology vendors, as well as committee meetings engaging in “think-tank” planning for the future of academic, public, and school libraries. This exhibition/conference will bring together current and proposed best practices in technical and patron services.
It’s not too late to register. I for one am excited.
Make sure to follow ALA Annual 2013 events on Twitter with the hashtag #ala2013.
I’ll be tweeting through the event from @jltaglich and @meta21st
Don’t hesitate to chat or share any thoughts.
Thanks for reading.