The 20th International Conference on Cultural Economics

RMIT University is pleased to host the 20th International Conference on Cultural Economics, presented by the Association for Cultural Economics International (ACEI).

The Conference will be held in Melbourne, Australia, from Tuesday, June 26th to Friday, June 29th, 2018. The program chair is Prof. Alan Collins (University of Portsmouth, UK), ACEI president-elect.

Conference themes include: creative industries, technological disruption in the arts, international trade in art and culture, cultural festivals, network structures in the arts, culture and sustainable development, digital participation, big data in the arts and culture, sport economics, artistic labour markets, arts and cultural organisations, creative cities, funding the arts, cultural heritage, art markets, the economics of food and wine, Indigenous art and culture, performing arts, valuing the arts… and more!


A Call for Papers is currently open until 31 January 2018

ACEI2018 aims to provide a forum for discussion on a range of issues impacting the arts and culture, and, for the first time, the conference will also address issues related to sport. The conference brings together academics from a number of disciplines who share an interest in empirically motivated research on topics related to the arts and culture, such as creative industries, creative cities, art markets and artistic labour, to name a few. The conference also welcomes insights and contributions from professionals, arts practitioners, policy makers and arts administrators, developing a fruitful dialogue that connects theory with practice.

With the conference host, RMIT, based in the Melbourne CBD, the location of ACEI2018, combined with the social and cultural programme that accompanies the conference, will give delegates an ideal opportunity to experience Australian culture and explore the city of Melbourne.

Website: http://sites.rmit.edu.au/acei2018/

Conference presentation (PDF, 1.49 Mb)


Interview with Brendan Coates


Hey Brendan! Introduce yourself please.

Hey everybody, I’m Brendan and at my day job I’m the AudioVisual Digitization Technician at the University of California, Santa Barbara. I run three labs here in the Performing Arts Department of Special Research Collections where we basically take care of all the AV migration, preservation, and access requests for the department and occasionally the wider library. I’m a UMSI alum, I got a music production degree there too, so working with AV materials in a library setting is really what I’m all about.

And, I get to work on lots of cool stuff here too. We’re probably most famous for our cylinder program, we have the largest collection of “wax” cylinders outside of the Library of Congress at roughly 17,000 items, some 12,000 of which you can listen to online. I’m particularly fond of all the Cuban recordings from the Lynn Andersen Collection that we recently put up. We’re also doing a pilot project with the Packard Humanities Institute to digitize all of our disc holdings, almost half a million commercial recordings on 78rpm, over the next 5 years.

And we’re building out our video program, too. We can do most of the major cartridge formats. We’re only doing 1:1 digitization though, so a lot of my work these days is figuring out how to speed up the back-end – we have 5,000 or so videotapes at the moment, but I know that number is only going to go up.

Outside of work, Morgan Morel (of BAVC) and I have a thing called Future Days where we’re trying to expand our skills while working with smaller institutions. Last year we made a neat tool called QCT-Parse, which runs through a QCTools Report and tells you, for example, how many frames have a luma bit-value above 235 or below 16 (for 8-bit video), outside of the broadcast range. You can make your own rules, too. We had envisioned it as like MediaConch for your QCTools reports and sorta got there… we’re both excited to be involved with SignalServer, though, which will actually get there (and much, much further).
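For readers who haven’t dug into QCTools reports, here is a minimal sketch of the kind of check QCT-Parse performs: counting frames whose luma peaks fall outside the 16–235 broadcast range for 8-bit video. It assumes the report is gzipped ffprobe-style XML with per-frame "lavfi.signalstats.YMIN"/"lavfi.signalstats.YMAX" tags; those key names and the function names are my own shorthand, not taken from the QCT-Parse source.

```python
# Minimal sketch of a QCT-Parse-style check, assuming a QCTools report stored
# as gzipped ffprobe XML with per-frame signalstats tags (an assumption about
# the report layout, not the actual QCT-Parse implementation).
import gzip
import sys
import xml.etree.ElementTree as ET

BROADCAST_MIN, BROADCAST_MAX = 16, 235  # legal luma range for 8-bit video


def count_out_of_range(report_path):
    """Count video frames whose luma peaks fall outside broadcast range."""
    out_of_range = total = 0
    with gzip.open(report_path) as handle:
        for _, frame in ET.iterparse(handle):
            if frame.tag != "frame" or frame.get("media_type") != "video":
                continue
            total += 1
            tags = {t.get("key"): t.get("value") for t in frame.iter("tag")}
            ymin = float(tags.get("lavfi.signalstats.YMIN", BROADCAST_MIN))
            ymax = float(tags.get("lavfi.signalstats.YMAX", BROADCAST_MAX))
            if ymin < BROADCAST_MIN or ymax > BROADCAST_MAX:
                out_of_range += 1
            frame.clear()  # keep memory flat on long reports
    return out_of_range, total


if __name__ == "__main__":
    bad, total = count_out_of_range(sys.argv[1])
    print(f"{bad} of {total} frames outside broadcast range")
```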

Today, though, I’m going to be talking about work I did with one of our clients, revising their automated ingest workflow.

What does your media ingest process look like? Does your media ingest process include any tests (manual or automated) on the incoming content? If so, what are the goals of those tests?

Videos come in as raw captures off an XDCAM; each individual video is almost 10 minutes long, they’re concatenated into 30-minute segments, and the segments are linked to an accession/interview number. They chose this route to maintain consistency with their tape-based workflow. This organization has been active since the ’90s, so they’re digitizing and bringing in new material simultaneously; I was only working on the new stuff, but it made it organizationally easier for them to keep that consistency.

After the raw captures are concatenated, we make FLV, MPEG, and MP4 derivatives; they’re hashed and sent to a combination of spinning disks and LTO, and all of their info lives in a PBCore FileMaker database. Derivatives are then sent out to teams of transcribers/indexers/editors to make features and send to their partners.
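As a rough sketch of that pipeline, assuming ffmpeg is on the PATH: the concat-list mechanism and "-f concat" flags are standard ffmpeg usage, but the derivative settings, file names and function names below are placeholders, not the client’s actual transcode recipe.

```python
# Hedged sketch: concatenate raw captures, make one example derivative, and
# record a checksum before the files go off to spinning disk / LTO.
import hashlib
import subprocess
from pathlib import Path


def concat_segments(raw_captures, segment_path):
    """Join raw XDCAM captures into one segment with the concat demuxer."""
    list_file = Path(segment_path).with_suffix(".txt")
    list_file.write_text("".join(f"file '{p}'\n" for p in raw_captures))
    subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0",
                    "-i", str(list_file), "-c", "copy", segment_path],
                   check=True)


def make_mp4_derivative(segment_path, mp4_path):
    """One example derivative; real settings would come from local policy."""
    subprocess.run(["ffmpeg", "-i", segment_path, "-c:v", "libx264",
                    "-pix_fmt", "yuv420p", "-c:a", "aac", mp4_path],
                   check=True)


def md5(path, chunk=1 << 20):
    """Checksum recorded alongside the file before it leaves our hands."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for block in iter(lambda: handle.read(chunk), b""):
            digest.update(block)
    return digest.hexdigest()
```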

When I started this project, there was no in-house conformance checking to speak of. Their previous automated workflow used Java to control Compressor for the transcodes and, whatever else might be said about that setup, they were satisfied with the consistency of the results.

Looking back on it now, I ~should~ have used MediaConch right at the start to generate format policies from that process and then evaluated my new scripts/ outputs against them, sort of a “test driven development” approach.
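Something along these lines, assuming the MediaConch CLI’s --create-policy and -p options behave as documented (worth checking against your installed version); the file and function names are illustrative only.

```python
# Hedged sketch of a "test driven" conformance setup: derive a policy from a
# known-good output of the old workflow, then run new outputs against it.
import subprocess


def policy_from_reference(reference_file, policy_path):
    """Write a MediaConch policy generated from a trusted earlier output."""
    result = subprocess.run(["mediaconch", "--create-policy", reference_file],
                            capture_output=True, text=True, check=True)
    with open(policy_path, "w") as handle:
        handle.write(result.stdout)


def check_against_policy(policy_path, candidate_file):
    """Return MediaConch's report for a new output; inspect it for pass/fail."""
    result = subprocess.run(["mediaconch", "-p", policy_path, candidate_file],
                            capture_output=True, text=True)
    return result.stdout
```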

Where do you use MediaConch? Do you use MediaConch primarily for file validation, for local policy checking, for in-house quality control, for quality testing for vendor files?

We use MediaConch in two places: first, on the raw XDCAM captures to make sure that they’re appropriate inputs to the ingest script (the ol’ “garbage in, garbage out”); and second, on the outputs, just to make sure the script is functioning correctly. Anything that doesn’t pass gets the dreaded human intervention.

At what point in the archival process do you use MediaConch?

Pre-ingest, we don’t want to ingest stuff that wasn’t made correctly, which you’ll find out more about later.

I think that this area is one where the MediaConch/ MediaInfo/ QCTools/ SignalServer apparatus can help AV people in archives to contextualize their work and make it more visible. These tools really shine a light on our practice and, where possible, we should use them to advocate for resources. Lots of people either think that a video comes off the tape and is done or that it’s only through some kind of incantation and luck that the miracle of digitization is achieved.

Which, you know, tape is magical, computers are basically rocks that think and that is kind of a miracle. But, to the extent that we can open the black box of our work, we should be doing that. We need to set those expectations for others that a lot of stuff has to happen to a video file before it’s ready for its forever home, similar to regular archival processing, and that that work needs support. We’re not just trying to get some software running, we’re implementing policy.

Do you use MediaConch for MKV/FFV1/LPCM video files, for other video files, for non-video files, or something else?

Each filetype has its own policy, 5 in total. The XDCAM and preservation masters are both mpeg2video, 1080i, dual-mono raw pcm audio, NDF timecode. Each derivative has its own policy as well, derived from files from the previous generation of processing.
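These aren’t the actual policies, but as a hedged illustration of the kind of characteristics they encode, a quick ffprobe-based sanity check for the master spec described above (MPEG-2 video, 1080 interlaced, two mono PCM streams; the NDF timecode check is left out) might look like this:

```python
# Illustrative only: read stream characteristics back with ffprobe's JSON
# output and compare them to the preservation-master spec described above.
import json
import subprocess


def probe_streams(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_streams", path],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)["streams"]


def looks_like_preservation_master(path):
    """Rough check: MPEG-2 video at 1080 interlaced plus two mono PCM tracks."""
    streams = probe_streams(path)
    video = [s for s in streams if s.get("codec_type") == "video"]
    audio = [s for s in streams if s.get("codec_type") == "audio"]
    return (len(video) == 1
            and video[0].get("codec_name") == "mpeg2video"
            and video[0].get("height") == 1080
            and video[0].get("field_order") in ("tt", "bt", "tb", "bb")
            and len(audio) == 2
            and all(s.get("codec_name", "").startswith("pcm_") for s in audio)
            and all(s.get("channels") == 1 for s in audio))
```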

Why do you think file validation is important?

Because it’ll save you a lot of heartache.

So, rather than start this project with MediaConch, I just ran ffprobe on some files from the older generation of processing and used that to make the ffmpeg strings for the new files. As a team, we then reviewed the test outputs manually and moved ahead.

The problems with that are 1) ffprobe doesn’t tell you as much as MediaConch/MediaInfo does (they tell you crucially different stuff), and 2) manual testing only works if you know what to look for, and because we were implementing something new, we didn’t.

It turns out that the ffmpeg concat demuxer messes with the [language] Default and Alternate Group tags of audio streams. Those tags control the default behavior of decoders and how they handle the various audio streams in a file, showing/ hiding them or allowing users to choose between them.

What that bug did in practice was hide the second mono audio stream from my client’s NLE. For a while, nobody thought anything of it (I didn’t even have a version of their NLE that I could test on), so we processed files incorrectly for like three months. The streams were still there in the preservation masters (WHEW) but, at best, they could only be listened to individually in VLC. If you want to know more about that issue you can check out my bug report.
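One small after-the-fact check for this class of problem (my own sketch, not the fix from the bug report) is to confirm via ffprobe’s disposition flags that every audio stream in the concatenated output is still marked as default, i.e. not something players and NLEs will hide:

```python
# Sketch: list audio streams in a concatenated file that are not flagged as
# "default"; any hit is a track that decoders may hide from the user.
import json
import subprocess


def hidden_audio_streams(path):
    """Return indexes of audio streams not carrying the default disposition."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_streams", "-select_streams", "a", path],
        capture_output=True, text=True, check=True).stdout
    streams = json.loads(out).get("streams", [])
    return [s["index"] for s in streams
            if not s.get("disposition", {}).get("default", 0)]


# Usage: a non-empty result on a concatenated segment is worth investigating.
# print(hidden_audio_streams("segment_001.mov"))
```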

If we had used MediaConch from the beginning, we would have caught it right away. Instead, a year’s worth of videos had to be re-done, over 500 hours of raw footage in total, over 4,000 individual files.

It’s important to verify that the things that you think you have are the things that you actually have. If you don’t build in ways to check that throughout your process, it will get messy, and it’s extremely costly and time-consuming to fix.

Anything else you’d like to add?

I’m really digging this series and learning how other organizations are grappling with this stuff. It’s a rich time to be working on the QC side of things and I’m just excited to see what new tools and skills and people get involved with it.


New Europeana Pro: the Beta version is out for your input

In September 2017, the Europeana Pro website went through a major redesign, also integrating Europeana Labs and Europeana Research, which previously lived on separate websites. One of the main reasons for this redesign was to put people first: the new site aims to be more ‘people’ oriented, to highlight Europeana’s close relationships with institutions, and to reinforce the work done by all the communities of the Europeana ecosystem to make a difference in the digital cultural heritage world.

Europeana looks forward to users’ feedback.

Check the new website at https://pro.europeana.eu/post/new-europeana-pro-the-beta-version-is-out-for-your-input

The top-4 pages to start exploring the new site:

Europeana: we transform the world with culture.

 


BAMit! Buy, Sell and Discover Art

In July of this year Baxters International launched an exciting new app – BAM!, a mobile and online marketplace for buying and selling art pieces from all over the world. Conceived as a social enterprise, and born out of a desire to support and promote Global Art Entrepreneurs, the sole objective of BAM! is to bring artists to light, enabling more and more of them to forge a sustainable career.

BAM! is a perfect platform for artistic individuals and students, both those already established in their field and those on the cusp of their careers; a tool to enable them to establish their brand, promote their works, and grow and develop as creative innovators.

It’s free and it’s easy: in just a few simple steps artists can upload their work and start connecting with art lovers around the world, giving them access to a much wider audience and the potential opportunity to forge a sustainable career.

More info: https://www.baxters-art.com/



The National Gallery predicts the future with artificial intelligence

August 16, 2017

The National Gallery, London, is working in collaboration with museum analytics firm, Dexibit, to use big data for predictive analytics.

For decades, directors at the helm of the world’s cultural institutions have faced the challenge of balancing the historical and cultural objectives of telling curatorial stories with the economic needs of a museum dependent on a visiting public paying to visit temporary exhibitions and use its other commercial services. One of the most difficult challenges is accurately predicting visitorship, both to the museum and to temporary exhibitions.

The National Gallery, which houses one of the greatest collections of paintings in the world and has more than 6 million visitors a year, is taking a new approach to tackle this problem, together with Dexibit. Using big data, Dexibit helps cultural institutions increase visitation, harness social outcomes and deliver efficiencies. With machine learning, the Gallery will explore how to move beyond simply analysing past visitor experiences in the museum, to employing innovative predictive analytics in forecasting future attendance and visitor engagement.


Chris Michaels, Digital Director, The National Gallery said:
“The National Gallery has put big data and analytics at the core of our digital strategy. We are delighted to be working with Dexibit to explore the potential of predictive analytics for better understanding on how we can serve our audiences. Machine learning and artificial intelligence have huge potential value for helping museums build better insight and develop new kinds of financial sustainability. We believe these new models can help us create better value for our visitors, and that the learnings we generate can help not only us but the wider sector. We look forward to working with Dexibit to unlock this exciting new area.”

Angie Judge, Chief Executive Officer, Dexibit said:
“Big data brings crucial innovation to the cultural sector at a time when the ground is shifting underneath museums and galleries. The National Gallery’s digital vision leads the way for the cultural sector – as museum analytics transition from retrospectively reporting the institutions’ own history, to using artificial intelligence in predicting our cultural future.”

With nearly 100 years of data and up to a thousand data points for every one of the millions of visitors the Gallery sees each year, this combination of art and science puts The National Gallery and Dexibit at the frontier of big data analytics.

ABOUT THE NATIONAL GALLERY

The National Gallery houses one of the greatest collections of paintings in the world. Located in London’s Trafalgar Square, the Gallery is free to visit and open 361 days a year. The National Gallery Collection comprises over 2,300 paintings in the Western European tradition from late medieval times to the early 20th century by artists including Botticelli, Leonardo, Titian, Rembrandt, Velázquez, Monet, and Van Gogh. The Gallery is also a world centre of excellence for the scientific study, art historical research, and care of paintings from this period. More at www.nationalgallery.org.uk.

ABOUT DEXIBIT

Dexibit is the global market leader for museum analytics. Dexibit’s software as a service includes personalised dashboards, automated reporting and intelligent insights specifically designed for cultural institutions. More at www.dexibit.com.


DI4R 2017 – connecting the building blocks for Open Science

Once again this year, EUDAT is co-organising the Digital Infrastructures for Research (DI4R) event together with RDA Europe, PRACE, EGI, OpenAIRE and GÉANT. The event takes place in the heart of the European Union in Brussels (Belgium) on 30th November and 1st December 2017, hosted at the stunning Square in the city centre and co-located with the first EOSCpilot Stakeholder event (28-29 November 2017).


What’s new?

This year the conference will revolve around the theme “Connecting the building blocks for Open Science” with the overarching goal of demonstrating how open science, higher education and innovators can benefit from these building blocks, and ultimately to advance integration and cooperation between initiatives.

This is why EUDAT encourages all researchers, developers and service providers to have their say in the conference by submitting an abstract for a 5-minute lightning talk, a 15-minute presentation, an interactive session (90 minutes), a poster or a demo (see the call for abstracts). The call is now open and closes on 13th October 2017 (click here to submit)!

Registration to the conference is also open: make sure you register by 31st October to benefit from the early-bird rate!

Website: https://www.digitalinfrastructures.eu/


a new EU project to “ROCK” historic city centres

In May 2017, a new EC-funded H2020 project named ROCK was launched by its coordinator, the Municipality of Bologna. Supported by a large international consortium of universities, municipalities, development and consulting groups, dissemination networks, SMEs and developers, and industry-driven associations, ROCK aims to support the transformation of historic city centres afflicted by physical decay, social conflicts and poor quality of life into Creative and Sustainable Districts through the shared generation of new sustainable environmental, social and economic processes.


ROCK – Regeneration and Optimization of Cultural heritage in creative and Knowledge cities – focuses on historic city centres as extraordinary laboratories to demonstrate how Cultural Heritage can be a unique and powerful engine of regeneration, sustainable development and economic growth for the whole city. The scope of the project is to develop an innovative, collaborative and systemic approach to promoting effective regeneration and adaptive reuse in historic city centres.

ROCK will therefore implement a repertoire of successful heritage-led regeneration initiatives drawn from 7 selected Role Model cities: Athens, Cluj-Napoca, Eindhoven, Liverpool, Lyon, Turin and Vilnius. The replicability and effectiveness of the approach and of the related models, in addressing the specific needs of historic city centres and in integrating site management plans with associated financing mechanisms, will be tested in 3 Replicator Cities: Bologna, Lisbon and Skopje.


ROCK’s actions have three drivers:

  • Organizational and technological innovation at local level, to boost city spaces by improving safety, mitigating social conflicts, attracting visitors and tourists
  • Social innovation and educational programmes to bridge generational gaps of the citizens and to value and empower the elderly population
  • Innovative training solutions including incubation actions, workshops and events to stimulate business creation

More information about ROCK: https://www.rockproject.eu/

Pic from the Bologna kick-off meeting


TECHNOLOGY for ALL Forum, 4th edition

The fourth edition of the TECHNOLOGYforALL Forum will be held in Rome from 17 to 19 October 2017.

Italy’s role in the development and conservation of world heritage provides the framework in which the Forum will analyse the real contribution of technologies that have moved beyond the initial wave of innovative enthusiasm and can now enter a production cycle governed by shared standards, supporting sustainable socio-economic development in which intelligent innovation plays a key role for the territory, cultural heritage and cities.

The program will emphasize, as much as possible, the Forum’s emerging content within the international context and the work of Italian companies in sectors where Italy plays a leading role in the world. The aim is not only the integration and interactivity of the technology, but also its sustainable socio-economic contribution across the production cycle, through to the final destination.

The day before the conference, a workshop in the field will be organized inside one of Rome’s archaeological areas, where manufacturers of instruments and service providers will be actively involved in acquiring data with advanced solutions, from the production phase through to the publication of big data and metadata.

In parallel with the conference, training events are organized on the development, structuring and organization of information, and on web and mobile vertical applications. The production processes that will be described concern a broad range of users, from government agencies to private companies, professional researchers, students and citizens.

The conference aims to collect experiences through presentations on the results of the field workshop, giving participants the opportunity to retrace the acquisition and processing workflow, enriched with the expertise and presentations of keynote experts, best practices, achievements and projects.

Three days of information and training, socialization and sharing, discussion and debate.

Website: https://www.technologyforall.it/en/


Interview with Ben Turkus and Genevieve Havemeyer-King of NYPL


Hey Ben, hey Gen! Introduce yourselves please.

(BT) Hi Ashley! I’m Ben Turkus; a long-time fan of MediaInfo/MediaConch, first-time interviewee. I’m the Assistant Manager of Audio and Moving Image Preservation at New York Public Library, and previously, I worked at the Bay Area Video Coalition in San Francisco, on projects similar to MediaConch (shoutout to QCTools). Rewinding even further, in what feels like a lifetime ago, I had a semi-illustrious career in the restaurant business (check out this other lame/hilarious/embarrassing interview if you dare).

With the support of the Andrew W. Mellon Foundation, NYPL is currently engaged in a major audiovisual digitization effort. Not only have we identified 230,000+ media objects as “high value, high risk” and worthy of preservation, but we’ve begun working hard to actually reformat as many as possible, through a combination of in-house and outsourced digitization efforts. To date, we’re about 75% of the way through an initial 60,000-item project set for 2016-2017. In many ways, it’s because of tools like MediaConch that we’ve been able to increase production without sacrificing quality.

When you’re working with numbers like these, it can be easy to lose sight of the content that you’re striving to save and make available to the public, but I will say this: NYPL’s collections are unbelievably rich and varied, and almost every single day we discover something incredible. It doesn’t feel right to pinpoint specific collections, but just yesterday Gen and I had a fight over who would get to QC some 2-inch, 24-track reel-to-reel audio recordings from Arthur Russell. It got bloody.

(GHK) Yeah, right. Hi, I’m Genevieve Havemeyer-King. I’ve been the Media Preservation Assistant for the NYPL’s Preservation of Audio and Moving Image Unit (PAMI) for about a year. As part of the initiative to preserve the Library’s at-risk audiovisual research collections, my primary role so far has been to assist with coordination of mass-digitization for magnetic and optical media. This includes (but is not limited to) refining and documenting our technical specifications, implementing a robust quality control workflow for our high volume of deliverables, tracking digitization progress, and maintaining inventory of our vendor shipments. As the first of several gatekeepers for NYPL’s media ingest system and digital repository, I also assist Ben with migrating and tracking assets on their journey towards long-term digital preservation.

What does your media ingest process look like? Does your media ingest process include any tests (manual or automated) on the incoming content? If so, what are the goals of those tests?

(GHK) I should start by mentioning that our unit primarily manages reformatting for audio and moving image (AMI) research collections; born-digital records, still images, and other collections are managed by our Archives Unit and Digital Imaging Unit colleagues (some of whom also use MediaConch!).

Before AMI deliverables reach NYPL’s Media Ingest system, they are reviewed and tested using a combination of custom scripts, proprietary and open-source tools, and manual content inspection. We check fixity, technical specification conformance, metadata validity, signal quality, and adherence to our preservation policies. Our suite of tools includes bagit.py, JSON schema, ajv, MediaConch, MediaInfo, QCTools, Wavelab, and human eyes and ears. There are a lot of goals in running these checks, but namely we strive to (a rough sketch of this sequence follows the list):

  • Ensure the integrity of our bits;
  • Ensure that metadata created for physical and digital assets is accurate and captures their preservation history to a reasonable degree;
  • Achieve consistency between in-house and vendor-produced deliverables; and
  • Catch and communicate about errors as soon as possible, before they are out of our hands and on their way to our repository.
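For illustration, here is a compressed sketch of that sequence (fixity, metadata validity, policy conformance) using bagit-python, jsonschema and the MediaConch CLI; the schema path, policy file names and function names are placeholders, not NYPL’s actual ami-metadata or ami-specifications artifacts.

```python
# Hedged sketch of a pre-ingest review pass: bag fixity, metadata schema
# validation, and a MediaConch policy check.
import json
import subprocess

import bagit          # pip install bagit
import jsonschema     # pip install jsonschema


def check_bag(bag_path):
    """Fixity / completeness: validate payload checksums and manifests."""
    bagit.Bag(bag_path).validate()   # raises BagValidationError on failure


def check_metadata(metadata_path, schema_path):
    """Metadata validity: the JSON record must conform to the local schema."""
    with open(metadata_path) as m, open(schema_path) as s:
        jsonschema.validate(json.load(m), json.load(s))


def check_policy(media_path, policy_path):
    """Technical conformance: run the file against a MediaConch policy."""
    return subprocess.run(["mediaconch", "-p", policy_path, media_path],
                          capture_output=True, text=True).stdout
```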

If they pass this review process, they are migrated to another pre-ingest staging area, where they undergo a similar series of automated tests to ensure they are safe to ingest.

Our metadata schema, specifications, and many of our customized tools are available on GitHub: ami-metadata, ami-specifications, and ami-tools.

Where do you use MediaConch? Do you use MediaConch primarily for file validation, for local policy checking, for in-house quality control, for quality testing for vendor files?

(GHK) MediaConch is a pretty integral part of our Quality Control workflow, and we make use of it for all of the above tasks. We created our own MediaConch policies based on the built-in and public policies, but because our specifications require that preservation master files retain many of the same characteristics as their physical source objects, we sometimes need to adjust or create new policies that are appropriate to particular media types as we encounter them. This means that ‘Fail’ results act more as flags for closer investigation, and that some specifications change slightly as needed.

We use the CLI for batch-checking entire shipments of deliverables, directly on the storage media on which they arrive (which can run anywhere from 400 to 6000 files, approximately 6TB per shipment, depending on the type of media on a given hard drive). Operating on a write-protected drive, we use multiple ‘find’ commands to simultaneously identify specific media types, apply a specific policy, and output the report as a .csv. We use the GUI for one-at-a-time testing of sample files and pilot projects, as well as investigating specific errors when an asset fails. We hope to create more complex tools that will integrate our metadata files (JSON) to inform exactly which policy is used for each file, for a more automated, streamlined system.
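In Python rather than ‘find’, a hedged sketch of the same batch idea might look like the following; the extension-to-policy mapping, file names and CSV columns are illustrative placeholders, not NYPL’s actual rules or report format.

```python
# Sketch: walk a (write-protected) delivery drive, pick a policy per file
# type, and collect MediaConch's output into a single CSV report.
import csv
import subprocess
from pathlib import Path

POLICY_FOR_SUFFIX = {                 # placeholder mapping, not NYPL's rules
    ".mkv": "video_pm_policy.xml",
    ".mp4": "video_sc_policy.xml",
    ".wav": "audio_pm_policy.xml",
}


def check_shipment(drive_root, report_csv):
    with open(report_csv, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["file", "policy", "mediaconch_output"])
        for path in Path(drive_root).rglob("*"):
            policy = POLICY_FOR_SUFFIX.get(path.suffix.lower())
            if policy is None:
                continue            # not a media type we check this way
            result = subprocess.run(["mediaconch", "-p", policy, str(path)],
                                    capture_output=True, text=True)
            writer.writerow([str(path), policy, result.stdout.strip()])
```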

At what point in the archival process do you use MediaConch?

(GHK) We use it exclusively in our pre-Ingest quality control workflow, which we carry out as soon as we receive a shipment from a vendor, and also on in-house deliverables before they’re migrated to our server, where they are then staged for Ingest.

(BT) We also use MediaConch’s “system” policies, and other organizations’ public policies, as targets/inspiration when drafting or refining our own technical specifications. In this way, MediaConch is there from the very beginning (or, in some cases, maybe it should have been). As with QCTools, there’s this educational side to MediaConch that, for me, is absolutely essential. On numerous occasions, we’ve used MediaConch to work backwards, referring to either the FFV1/MKV implementation checker, or the “Matroska is well described” or “TN2162 compliant?” system policies to gain a better understanding of the files that we’re creating/having created for us.

I could go on and on about this, but in short: MediaConch has helped us right some of the self-descriptive wrongs that are presented by various capture hardware/software configurations. By clueing us into the ways that requesting or creating “uncompressed video in Quicktime” is not really sufficient, MediaConch has pushed us to rectify issues either during a transcode, or by learning to use cool tools like MKVToolnix.

Do you use MediaConch for MKV/FFV1/LPCM video files, for other video files, for non-video files, or something else?

(GHK) We use it for both video and audio. We began by checking our Quicktime-wrapped, 10-Bit Uncompressed video preservation master files, and have now switched to MKV/FFV1/LPCM, and we also use it to check our MPEG-4 service copies. For audio, we use it to check our Broadcast Wave preservation masters and edit masters.

(BT) Recently, we’ve also been identifying and flagging outlier formats that may need more in-depth analysis, such as early digital audio formats that were recorded on videotape, digital audio in general, HDV, and DV-family video. These formats may not have consistent characteristics that are easily checked against a standard “policy”, so we’re still exploring how best to approach them at-scale.

Why do you think file validation is important?

(BT) There’s a baseline of quality and conformance that we’d like to adhere to. Beyond that, some nuanced aspects of validation require some fluidity; there is no consensus on whether certain specifications are essential. We try to ensure that files are as self-descriptive as possible, with the understanding that preservation is a process that involves many stakeholders, and that fluctuating resources and capabilities, as well as the complex nature of audiovisual media, impact our ability to ensure and enforce certain requirements.

Anything else you’d like to add?

(GHK) As a practical tool, MediaConch has made a big difference in our ability to manage large-scale digitization. We’ve caught many errors that might have been overlooked if not for the ability to check a high volume of media all at once. Some things we’ve caught include random 8-bit preservation master files in the mix, inconsistency in audio channel configuration among access copies, and files for which exceptions to our specs had to be made but weren’t communicated right away. By catching these early, and using the accompanying metadata to help diagnose where they may have originated, we’ve been able to identify and prevent several issues from being replicated throughout an entire preservation project.

Using it has also continuously underlined how complex and diverse audiovisual formats are, and how a nuanced approach to preservation can sometimes lead to a rabbit hole of requirements and specifications. So, it’s helped us rethink some of our own processes – how we can keep simplifying things to balance our requirements with our scale of production – and has inspired more workflow development for ideal automated QC processes. The tool, its developers, and its user base continue to provoke much-needed dialogue about format and codec standardization in the preservation community, which our whole field really benefits from.

(BT) I think we’ve found that engaging in the practice of conformance checking, and participating in the development of tools to support this practice, can influence wider discussion about language, standardization, and compliance for all kinds of pre-existing “standards.” Again, returning to the “uncompressed video in Quicktime” question, if a vendor is incapable of creating files that adhere to the parameters set forth in Apple’s Technical Notes (TN2162), and if that vendor can’t capture directly to FFV1/MKV, we’re presented with an interesting challenge. How can we ensure that vendors are creating the “right” kind of Quicktime files for transcoding to MKV/FFV1; what are the risks and compromises involved in this approach?

A large amount of trust is placed in our vendors: that they are reformatting to the specifications appropriate to particular formats (e.g. audio sampling rate). And while MediaConch will confirm that a file passes our specifications, it cannot ensure that the specification chosen for a given format was appropriate for that particular object. For us, this makes MediaConch an excellent tool for supplementing manual quality control with automated systems, speeding up what can often be a slow process.


A focus on heritage and creativity with an innovative method: Executive Master Cultural Heritage – Florence 2018

Università Cattolica del Sacro Cuore – Milano and the Opera di Santa Maria del Fiore in Florence are glad to present the Executive Master in Cultural Heritage. Creativity, Innovation and Management: a one-year, full-time program, taught entirely in English, that attracts candidates from all over the world.

The Master is targeted at graduates and young professionals eager to take advantage of the global changes related to creative industries and tourism.

Thanks to Italian excellence in cultural heritage and entrepreneurial talent, the Master offers a solid foundation in management – with specific attention to the historical-artistic and tourism sectors, supporting innovation, creativity, and business development.

  • Location: Opera di Santa Maria del Fiore, Studium Florentinum, Florence, Italy
  • Duration: January – December 2018
  • Format: Full time
  • Degree: 70 ECTS
  • Language: English
  • Classes: Taught lessons with workshops, group work and case studies
  • Internship: Strategic practical work experience with some of the region’s leading business companies

Download the Brochure for more information (PDF, 775 Kb)