DIGIMAG Journal issue 76 – SMART MACHINES FOR ENHANCED ARTS


Artificial Intelligence (AI) and Machine Learning (ML) are often treated as synonyms, not least because they are the buzzwords of this decade. In fact they are not. Both, however, question the ability of machines to perform and complete tasks in a “smart” way, challenging human intelligence and specificity.

With machines becoming more and more intelligent, Machine Learning is today not only an interesting and challenging topic, but also a crucial discipline. Where computing was initially just a matter of calculation, it has now moved beyond simple “processing” to include “learning”. In the age of Big Data and the IoT, machines are asked to go beyond pure programming and algorithmic procedures, taking on data prediction, OCR and semantic analysis, learning from past experience and adapting to external inputs, reaching into the domain of human production and processes.

As Gene Kogan and Francis Tseng write in their in-development book “Machine Learning for Artists”, we can “pose today to machines a single abstract problem: determine the relationship between our observations or data, and our desired task. This can take the form of a function or model which takes in our observations, and calculates a decision from them. The model is determined from experience, by giving it a set of known pairs of observations and decisions. Once we have the model, we can make predicted outputs”.

Machine Learning, and Artificial Intelligence methods more generally, are thus reaching far beyond the fields of technology and science, impacting also the arts, product design, experimental fashion and creativity in general. As ML features can fit with digital art practices, we are led to explore the ways some AI techniques can be used to enhance human performative gestures and models of creativity.

How can biological systems and machine intelligence collaborate to create art, and what is the cultural outcome for our society? What is the new role of creativity in this scenario? How will contemporary art face a future generation of automated artificial artists/designers, able to learn from the creatives themselves, or to have a direct impact on human creativity? Will the anthropocentric vision of the creative process behind artistic creation be affected by new intelligent neural networks?

With this call Digicult is seeking contributions on the topics above, especially from individuals active in the artistic and academic fields (curators, critics, hackers, fabbers, creative producers, lab managers, activists, designers, theorists, independent and academic writers, scholars, artists, etc.).

Deadline: 01 September 2017

More info: http://www.digicult.it/digimag-journal/


About DIGIMAG

Digimag Journal is an interdisciplinary online publication seeking high-standard articles and reviews that focus on the impact of the latest technological and scientific developments on art, design, communication and creativity. Following the former Digimag Magazine, it is based on international calls for papers on given subjects and provides readers with comprehensive accounts of the latest advancements in the international digital art and culture scene. It is published by Digicult Editions, free of charge as PDF, EPUB and MOBI, and in print on demand.


Call for artists: LE MERAVIGLIE DEL POSSIBILE

In December 2017, Kyber Teatro organises in Cagliari (Italy) the fourth edition of the International Theatre, Art and New Technologies Festival

Le meraviglie del Possibile


The LMDP Festival is the first of its kind in Italy. It aims to promote the interrelation between artistic and technological languages.

Kyber Teatro – spin-off of the L’Aquilone di Viviana theatre company and creator and manager of the Festival – addresses an open call to Italian and international artists to submit projects on the “interaction between arts and technology”.

Who can attend

Participation is open to artists of any nationality, working individually or in groups.

Eligible projects

  • Theatrical plays, performances.
  • Installations that explore and realize the interaction between artwork, exhibition space and observers with the contribution of technology.

Application (deadline 21st September 2017)

The theme of the fourth edition of LMDP Festival is the interrelation between theatre, arts and new technology.

The application must contain:

• Artist’s CV;

• Detailed description of the project (in PDF);

• Technical rider;

• Selection of max 5 photos;

• Link to audio/video material (Vimeo or YouTube).

Results will be notified only to the selected projects by 1st October 2017.


Publication

By applying to the call, artists agree that their projects may be presented at the Festival. Selected artists must provide a short biography and an abstract of the project. They also agree that material related to the project may be published on the Festival website and/or presented to the press for promotional purposes.

 

Archiving process

Artists authorise Kyber Teatro – L’aquilone di Viviana to present their work, to store the material and to make it accessible through the Festival’s website. All rights to the artwork and images remain with the artist. The Organization is also entitled to document the event in all its phases through audio recordings, video or images.

 

Application materials must be sent to: info@kyberteatro.it

 

Kyber Teatro – L’Aquilone di Viviana Soc. Coop.

Via Newton 12, 09131 Cagliari

Tel: +39 0708607175 – Mob: + 39 3470484783

info@kyberteatro.it

 


TWA cultural heritage Digitisation Grant 2017 for UK-based digitisation projects

Following a successful 2016 and excellent bids from archives and other memory institutions last year, the TWA Digitisation Grant has relaunched with a fresh tranche of funding in 2017.

The fund offers grants of up to £5000 for UK archives, special collections libraries and museums to digitise their collections.

Last year’s esteemed judging panel will return to assess the grant bids and select the winners: ARA chief executive John Chambers; HLF-appointed special advisor Claire Adler; and senior digitisation consultant at TownsWeb Archiving, Paul Sugden.

The Grant can be used to fund the digitisation of bound books, manuscripts, oversize maps and plans, 35mm slides, microfilm/fiche, glass plate negatives, and other two-dimensional cultural heritage media. It can also be used to fund opening up access to heritage collections online.

The deadline for applications is 7th July 2017.

How to apply and more details at:

https://www.townswebarchiving.com/twa-digitisation-grant/



Europeana 1914-1918 thematic collection launches during Europeana Transcribathon Campus Berlin 2017

Officially launching the new Europeana 1914-1918 thematic collection, Europeana Transcribathon Campus Berlin 2017 marks the next milestone for the crowdsourcing digital archive dedicated to the historical conflict, and puts a spotlight on the involvement of its community.

On 22 and 23 June, the Berlin State Library will host the Europeana Transcribathon Campus Berlin 2017. Over two days, teams from three generations and several European countries will compete to digitally transcribe as many World War One documents as possible, and link them to other historical sources such as early 20th century newspapers. Transcribathons gather people from across Europe and online to create digital versions of handwritten items found on Europeana 1914-1918. These innovative events are the latest crowdsourcing initiative enriching the Europeana 1914-1918 digital archive. Since their launch in November 2016, several million characters and more than 12,000 documents, from love letters to poems, have been transcribed.

Frank Drauschke, of the Europeana 1914-1918 project team, says: “Most sources on Europeana 1914-1918 are written by hand, and often hard to decipher. Transcribathon aims to help us ‘polish’ a raw diamond by making private memorabilia readable online. We utilise the power of our community to transcribe as many private stories and documents as possible, from diverse languages and regions of Europe, and make them available to the public.”

These unique resources found on Europeana 1914-1918 have been collected and digitized since 2011 during collection days and online uploads inviting people to submit their personal documents. During  Europeana Transcribathon Campus Berlin 2017, Europeana 1914-1918, previously living on a separate website, will officially move platform and re-launch as a new Europeana thematic collection. This move onto the Collections site aims to broaden the current audience by opening up World War One related content to all Europeana visitors and to enrich their experience. People can now discover digital versions of testimonies handwritten 100 years ago, complemented by millions of digitized newspapers and documents provided by libraries and archives. Linking user generated content with other historical sources makes it possible to view them within the bigger picture. And thanks to the ability to search across the Europeana platform, people can now also easily access related items from the other four thematic collections: Europeana Art, Music, Fashion and Photography.

Europeana Transcribathon Campus Berlin 2017 is organized by Europeana, Facts & Files and the Berlin State Library, in cooperation with the German Digital Library and Wikimedia.


Europeana is Europe’s digital platform for cultural heritage, collecting and providing online access to tens of millions of digitized items from over 3,500 libraries, archives, audiovisual collections and museums across Europe, ranging from music, books, photos and paintings to television broadcasts and 3D objects. Europeana encourages and promotes the creative reuse of these vast cultural heritage collections in education, research, tourism and the creative industries.

Europeana Collections are the result of a uniquely collaborative model and approach: the web platform is provided by Europeana, the content comes from institutions across Europe, while consortiums provide the theme and editorial expertise to bring the content alive for visitors through blogs and online exhibitions.

Europeana 1914-1918 is a thematic collection that started as a joint initiative between the Europeana Foundation, Facts & Files, and many other European partner institutions. It originates from an Oxford University project in 2008. Since 2011, over 200,000 personal records have been collected, digitized and published. These events have now expanded to over 24 countries across Europe, building up an enthusiastic community of about 9,000 people.

Europeana Transcribe is a crowdsourcing initiative that allows the public to add their own transcriptions, annotations and geo-tags to sources from Europeana 1914-1918. Developed by Facts & Files and Olaf Baldini, piktoresk, the website is free to use and open to all members of the public. New contributors can now register and submit their own stories within the Europeana Collections site.

Europeana Newspapers is making historic newspaper pages searchable by creating full-text versions of about 10 million newspaper pages. www.europeana-newspapers.eu

Europeana DSI is co-financed by the European Union’s Connecting Europe Facility


International Survey on Advanced documentation of 3D Digital Assets

The e-documentation of Cultural Heritage (CH) assets is inherently a multimedia process and a great challenge. It is addressed through the digital representation of the shape, appearance and conservation condition of the heritage/cultural object, for which the 3D digital model is expected to become the representation. 3D reconstructions should progress beyond current levels to provide the semantic information (knowledge/story) necessary for in-depth study and use by researchers and creative users, offering new perspectives and understandings. Digital surrogates can add a laboratory dimension to on-site explorations, opening new avenues in the way tangible cultural heritage is addressed.

The generation of high-quality 3D models is still very demanding, time-consuming and expensive, not least because the modelling is carried out for individual objects rather than for entire collections, and because the formats used in digital reconstructions/representations are frequently not interoperable and therefore cannot easily be accessed, re-used or preserved.


This 15-20 minute survey aims to gather your advice concerning the current and future e-documentation of 3D CH objects. We would appreciate your taking the time to complete it.

Please access the survey HERE

Your responses are voluntary and will be confidential. Responses will not be identified by individual. All responses will be compiled together and analyzed as a group. The results of this survey will be published before the end of this year on the channels of Europeana Professional (pro.europeana.eu), CIPA (Comité International de Photogrammétrie Architecturale – http://cipa.icomos.org/), the Digital Heritage Research Lab (http://digitalheritagelab.eu/dhrlab/lab-overview/) and Digitale Rekonstruktion (http://www.digitale-rekonstruktion.info/uber-uns/).


VIEW Journal Celebrates Fifth Anniversary with New Interface

VIEW Journal started five years ago as the first peer-reviewed, multimedia and open access e-journal in its field. The online open access journal now has a fresh new look. Its new interface makes reading and navigation easier. More importantly, it now offers room for discussion – with the possibility to leave comments and responses under every article. Articles still feature embedded audiovisual sources. The journal continues to provide an online reading experience fit for a 21st century media journal.


Fifth Anniversary

VIEW Journal was started by EUscreen and the European Television History Network. It is published by the Netherlands Institute for Sound and Vision in collaboration with Utrecht University, Université du Luxembourg and Royal Holloway, University of London. A heartfelt thank you goes to all the authors, the editorial board and the team, whose support and hard work over the years have built the journal’s renown.

For the past five years, VIEW has published two issues per year. The journal’s aim – to offer an international platform in the field of European television history and culture – still stands. It reflects on television as an important part of our European cultural heritage and is a platform for outstanding academic and archival research. The journal was and remains open to many disciplinary perspectives on European television, including but not limited to television history, television studies, media sociology, media studies, and cultural studies.

Issue 10: Non-Fiction Transmedia

With the new design it also proudly presents its 10th issue on Non-fiction Transmedia. This issue was co-edited by Arnau Gifreu-Castells, Richard Misek and Erwin Verbruggen. The issue offers a scholarly perspective on the emergence of transmedia forms; their technological and aesthetic characteristics; the types of audience engagement they engender; the possibilities they create for engagement with archival content; technological predecessors that they may or may not have emerged from; and the institutional and creative milieux in which they thrive.

You can find the full table of contents for the issue below. We wish you happy reading and look forward to your comments on the renewed viewjournal.eu.

 

Table of Contents

EDITORIAL

DISCOVERIES

EXPLORATIONS


MediaConch in action! Issue #1

Hey Eddy! Introduce yourself please.

Hey Ashley! I’ve recently become a “Denverite” and have started a new job as an Assistant Conservator specializing in electronic media at the Denver Art Museum (DAM). Before that, I was down in Baton Rouge, Louisiana working as a National Digital Stewardship Resident with Louisiana Public Broadcasting (LPB). When I’m not working, I like to listen to podcasts, read comics, play bass, and stare into the endless void that is “twitter” (@EddyColloton). I’m on a big H. P. Lovecraft kick right now so let me know if you have any recommendations (Dunwich Horror is my current fav, but At the Mountains of Madness is a close second).

What does your media ingest process look like? Does your media ingest process include any tests (manual or automated) on the incoming content? If so, what are the goals of those tests?

The ingest procedures I’m using for the Denver Art Museum are pretty different from the ones we worked out at Louisiana Public Broadcasting, for all kinds of reasons. The two institutions have very different types of collections, and they use their repositories very differently, too.

At the Denver Art Museum, ideally, material will enter the digital repository upon acquisition. Ingest procedures need to be able to be tailored to the eccentricities of a particular media artwork, and flexible enough to cover the wide array of media works that we acquire (websites, multi-channel video installations, software-based artworks, or just a collection of tiff files). With this in mind, we’re using Archivematica for ingest of media into our digital repository. It allows us to automate the creation of METS wrapped PREMIS XML documentation, while manually customizing which microservices we choose to use (or not use) as we ingest new works. Some of the microservices I use on a regular basis are file format identification through Siegfried, metadata extraction with MediaInfo, ExifTool, and Droid, and normalization using tools like FFmpeg.
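Outside Archivematica, the same characterization steps can be run by hand with the underlying tools. The following is an illustrative sketch only (the file name is hypothetical, and these are standalone invocations rather than Archivematica’s actual microservice calls):

sf artwork_video.mov          # Siegfried: file format identification against PRONOM
mediainfo artwork_video.mov   # MediaInfo: technical metadata for the audiovisual streams
exiftool artwork_video.mov    # ExifTool: embedded metadata extraction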

Things couldn’t be more different at LPB. All completed locally produced programming automatically becomes part of the LPB archive. The LPB Archive is then responsible for preserving, describing and making that content accessible through the Louisiana Digital Media Archive (LDMA), located at http://www.ladigitalmedia.org/. LPB’s ingest procedures need to allow for a lot more throughput, but there’s much less variability in the type of files they collect compared to the DAM. With less of a need for manual assessment, LPB uses an automated process to create MediaInfo XML files, an MD5 checksum sidecar, and a MediaConch report through a custom watchfolder application that our IT engineer, Adam Richard, developed. That code isn’t publicly available unfortunately, but you can see the scripts that went into it on my GitHub.
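To give a sense of what that automated step produces, a minimal shell sketch of equivalent sidecar generation might look like the following (illustration only, not the LPB watchfolder code; the file and policy names are hypothetical):

mediainfo --Output=XML program_master.mov > program_master.mov_mediainfo.xml    # technical metadata sidecar
md5sum program_master.mov > program_master.mov.md5                              # fixity sidecar
mediaconch --policy=lpb_policy.xml program_master.mov > program_master_report.txt   # MediaConch policy report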

Where do you use MediaConch? Do you use MediaConch primarily for file validation, for local policy checking, for in-house quality control, for quality testing for vendor files?

Primarily policy checking, as a form of quality assurance. At LPB, most of our files were being created through automated processes. Our legacy material was digitized using production workflows, to take advantage of existing institutional knowledge. This was very helpful, because we could then repurpose equipment, signal paths, and software. But, using these well worn workflows also meant that occasionally files would be encoded incorrectly, aspect ratio being one of the most common errors. We would check files against a MediaConch policy as a way of quickly flagging such errors, without having to invest time watching and reviewing the file ourselves.

At the Denver Art Museum, we plan to use MediaConch in a similar way. The videotapes in the museum’s collection will be digitized by a vendor. Pre-ingest, I plan to do tests on the files for quality assurance. After fixity checks, I will check to make sure our target encoding and file format was met by the vendor using MediaConch. I intend to use the Carnegie Archive’s python script from their GitHub to automate this process. Once I know that the files are encoded to spec, I will be creating QCTools reports and playing back the files for visual QC. I’ve been following the American Archive of Public Broadcasting’s QC procedures with interest to see if there’s any tricks I can cop from their workflow.
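For instance (an illustrative sketch with hypothetical file names, not the Carnegie script or the AAPB workflow themselves), the pre-ingest checks described here map roughly onto commands like these, assuming the QCTools command-line tool qcli is installed:

md5sum -c vendor_manifest.md5                               # fixity: verify the vendor-supplied checksums
mediaconch --policy=dam_target_spec.xml vendor_file.mkv     # confirm the target wrapper/encoding was met
qcli -i vendor_file.mkv                                     # generate a QCTools report ahead of visual QC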

At what point in the archival process do you use MediaConch?

Basically pre-ingest for both LPB and the DAM. When using MediaConch as a policy checker, my goal is to make sure we’re not bothering to ingest a file that is not encoded to spec.

Do you use MediaConch for MKV/FFV1/LPCM video files, for other video files, for non-video files, or something else?

I use MediaConch for MKV/FFV1/LPCM video files and for other types of video files as well. At LPB we were using MediaConch as a policy checker with IMX50 encoded files in a Quicktime wrapper and H.264 encoded files in a .mp4 wrapper. You can find the policies I created for LPB here, and I talk through the rationale of creating those policies in the digital preservation plan that I created for LPB, available here (MediaConch stuff on page 24, and page 45). I’m happy to report that LPB is currently testing a new workflow that will transcode uncompressed .mov files into lossless MKV/FFV1/LPCM files (borrowing heavily from the Irish Film Archive’s lossless transcoding procedures, as well as the CUNY TV team’s “make lossless” microservice).

At the DAM, we’ll be using MediaConch as a policy checker with Quicktime/Uncompressed/LPCM files, and for validation of our MKV/FFV1/LPCM normalized preservation masters.
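As a rough illustration of the difference between those two uses (with hypothetical file and policy names):

mediaconch --policy=dam_uncompressed_mov.xml acquisition_master.mov   # policy check against a local specification
mediaconch preservation_master.mkv                                    # default implementation check of the MKV/FFV1/LPCM file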

My understanding is that MediaConch is going to be integrated into Archivematica’s next release. I’m really looking forward to that update, since at the DAM we have decided to create MKV/FFV1/LPCM files for any digital video in the collection that uses a proprietary codec, or an obsolete format. A lot of the electronic media in the museum’s design collection comes from the AIGA Archives, which the DAM collects and preserves. A ton of the video files from the AIGA Archives were created in the aughts, and they use all kinds of whacky codecs – my favorite so far is one that MediaInfo identifies as “RoadPizza” (apparently a QuickTime codec). Given that I don’t want to rely on the long-term support of the RoadPizza codec, we’re normalizing files like that to MKV/FFV1/LPCM through an automated Archivematica micro-service that uses the following FFmpeg script (which I cobbled together using ffmprovisr):

ffmpeg -i input_file -c:v ffv1 -level 3 -g 1 -slicecrc 1 -slices 16 -c:a pcm_s16le output_file.mkv
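For reference, the flags in that command do the following:

# -c:v ffv1 -level 3      : encode the video losslessly with FFV1 version 3
# -g 1                    : every frame is a keyframe (intra-only coding)
# -slicecrc 1 -slices 16  : split each frame into 16 slices, each with its own CRC for error detection
# -c:a pcm_s16le          : uncompressed 16-bit little-endian PCM audio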

To be clear we are also keeping the original file, but just transcoding a second version of the file to be cautious.

Through that implementation we intend to use the Archivematica MediaConch microservice to validate the encoding of the video files that we have normalized for preservation.

Why do you think file validation is important to the field?

I wish this was a joke but it honestly helps me sleep better at night. MediaConch and melatonin make for a well rested AV archivist/conservator, hah. I like knowing that the video files that we are creating through automated transcoding processes are up to spec, and comply with the standards being adopted by the IETF.

Also, using MediaConch as a policy checker saves me time, and prevents me from missing bonehead mistakes, of which there are loads, because I work with human beings (and possibly some aliens, you never know).

Anything else you’d like to add?

Just want to offer a big thanks to the MediaConch team for everything that they do! I know there’s a pretty big overlap betwixt team Conch, CELLAR, and the QCTools peeps – you’re all doing great work, and regularly making my job easier. So thanks for that.

To read more from Eddy, check out his Louisiana Public Broadcasting Digital Preservation Plan


veraPDF 1.6 released

The latest release of veraPDF is available to download. The validation logic and test corpus of veraPDF 1.6 have been updated to comply with the resolutions of the PDF Association’s PDF Validation Technical Working Group (TWG). The TWG brings together PDF technology experts to analyse PDF validation issues in a transparent way. It also connects veraPDF to the ISO committee responsible for PDF/A.

The GUI and command line applications feature a new update checker which lets you know if you are running the latest version of the software. If you’re using the GUI application, select “Help -> Check for updates”; command line users can type “verapdf --version -v” to ensure you have the latest features and fixes.
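For a typical validation run, the command line usage looks roughly like this (a minimal sketch; the file name is hypothetical):

verapdf --flavour 1b document.pdf   # validate document.pdf against the PDF/A-1b profile
verapdf --version                   # print the installed veraPDF version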

Other fixes and improvements are documented in the release notes: https://github.com/veraPDF/veraPDF-library/releases/latest

 

Download veraPDF

http://www.preforma-project.eu/verapdf-download.html

 

Help improve veraPDF

Testing and user feedback are key to improving the software. Please download and use the latest release. If you experience problems, or wish to suggest improvements, please add them to the project’s GitHub issue tracker: https://github.com/veraPDF/veraPDF-library/issues or contact us through our mailing list: http://lists.verapdf.org/listinfo/users.

User guides and documentation are published at: http://docs.verapdf.org/.

 

PREFORMA International Conference  – Shaping our future memory standards

To find out more about veraPDF and the PREFORMA project, join us at the PREFORMA International Conference in Tallinn on 11-12 October 2017. For more information see: http://finalconference.preforma-project.eu/.

 

About

The veraPDF consortium (http://verapdf.org/) is funded by the PREFORMA project (http://www.preforma-project.eu/). PREFORMA (PREservation FORMAts for culture information/e-archives) is a Pre-Commercial Procurement (PCP) project co-funded by the European Commission under its FP7-ICT Programme. The project’s main aim is to address the challenge of implementing standardised file formats for preserving digital objects in the long term, giving memory institutions full control over the acceptance and management of preservation files into digital repositories.


NEM Summit 2017 – call for abstracts

The NEM Summit is an international conference and exhibition, open to co-located events and organised every year since 2008 by the NEM Initiative (New European Media – European Technology Platform – www.nem-initiative.org) for all those interested in the broad area of Media and Content. Over the years, the NEM Summit has grown into an annual not-to-be-missed event, providing attendees with a key opportunity to meet and network with prominent stakeholders, access up-to-date information, discover the latest technology and market trends, identify research and business opportunities, and find partners for upcoming EU-funded calls for projects.


The 10th edition of the NEM Summit conference and exhibition will be held in the Spanish capital, Madrid, at the exciting venue of the Museo Reina Sofía. Please reserve the dates to attend the NEM Summit 2017 and take part in discussions on the latest developments in European media, content, and creativity.

NEM Summit 2017 – Call for Extended Abstracts

  • The expected length of extended abstracts is two A4 pages, with the possibility to provide further supporting information
  • Extended abstracts must be submitted by 26 June 2017
  • Fast-track evaluation of the received contributions will be applied, and results will be announced by 26 July 2017
  • More information can be found in the attached Call for Extended Abstracts
  • The submission portal is available on the NEM Initiative website at www.nem-initiative.org

Further details about the NEM Summit 2017, further opportunities to participate and exhibit at the event, and online Summit registration will be provided soon on the NEM Initiative website at www.nem-initiative.org.

Download the call for abstracts (PDF, 91 kb)

 


Photoconsortium Annual Event, hosted by CRDI. Public seminar and general assembly.

Image: Ajuntament de Girona, Vista exterior de l’absis de la Catedral de Girona (exterior view of the apse of Girona Cathedral), Public Domain.

Hosted by Photoconsortium member CRDI, the 2017 Annual Event of Photoconsortium is organized in the beautiful city of Girona (Spain), seat of an important audiovisual archive holding millions of photographs, films, and hours of video and sound recordings, mostly from private sources.

The archive is managed by CRDI, a body of the city Municipality created in 1997 with the mission to discover, protect, promote, provide access to and disseminate the cinematographic and photographic heritage of the city of Girona.

Photos and follow-up


Friday 9th June 2017

PUBLIC SEMINAR: PHOTOCONSORTIUM INFORMATIVE SESSION

Languages of the seminar: English, Catalan and Spanish

Chair: David Iglésias, Officer of Photoconsortium Association

09:45 Welcome message by Joan Boadas, Director of CRDI

10:00 Prof. Fred Truyen, KU Leuven. President of Photoconsortium Association. Presenting the Photography Collection in Europeana

10:15 Antonella Fresa, Promoter Srl. Vice-president of Photoconsortium Association. Photoconsortium, the expert hub for photography

10:30 Pierre-Edouard Barrault, Operations Officer at Europeana. Publishing in Europeana – Tools, Data & good practices

11:30 Coffee break and networking

12:00 Sílvia Dahl, Laia Foix. Photography deterioration terminology. A proposal to broaden the EuropeanaPhotography vocabulary.

12:30 Pilar Irala. The Jalon Ángel collection.

13:00 Debate

14:00 Lunch

15:00 – 16:00 Visit to the Cinema Museum


On the day before, 8th June, the General Assembly of Photoconsortium members took place.