Archive for the ‘opendata’ category

Open Government vs Government 2.0

Australia and New Zealand have a proud history of calling the same thing different names, for no reason other than etymological coincidence. Duvet vs doona, thongs vs jandals, togs vs cossies. These differences are defended fiercely, in a kind of friendly rivalry.

It’s the same with open government. In 2009 the Australian Government 2.0 Taskforce developed a significant report that went on to inform government policy. The term stuck, and the open government communities in Australia are called Gov2QLD, Gov2NSW, Gov2ACT.

In NZ from 2008 we had open government barcamps, then Open NZ was formed. In 2011 the Declaration on Open and Transparent Government was passed. We’ve settled on ‘open government’ or in abbreviated form ‘opengovt’.

Despite these differences, the formation of open government policy in both countries, and the development of related communities of practice, has involved a lot of trans-Tasman exchange of ideas. Through visits to NZ by people like Senator Kate Lundy, Pia Waugh, and Nick Gruen, collaborative standards bodies like the Australia New Zealand Land Information Council (ANZLIC), and participation in conferences in Australia by our government officials, open government is a journey we and our cousins across the ditch are travelling together.

The paths we take won’t be exactly the same. There are many differences: Australia has a state and federal system and two houses of parliament, while NZ just has central and local government. Fundamentally though, we both come from the Westminster system, have cultures founded on egalitarian values, and share much in common in our economies and place in the Pacific.

In that spirit, I’m off to Australia this week. After my long stint head down at the Canterbury Earthquake Recovery Authority, it’s time to renew and strengthen ties. I’ll be speaking at a range of events, and hoping to learn lots from Australian progress in open data, shared services, geospatial data infrastructure, and participative engagement.

Among other things I’ll be speaking at:

I’ll blog what I learn as I go.

Christchurch 2.0

To build Christchurch 2.0, the legacy systems of the past, on computers, in organisations, and in people’s brains, will not be adequate for the task. We have to upgrade.

At the TEDxEQChch event today, three months after the devastating earthquake in Christchurch, Bjarke Ingels, an architect from Denmark, sent us a message encouraging us to build ‘Christchurch 2.0’. His architecture is based on thinking in new and very different ways.

Since the week before Easter I’ve been working with the Canterbury Earthquake Recovery Authority (CERA), setting up online communications and engagement processes. I had the privilege of going into the Emergency Operations Centre (EOC) in the Art Gallery and talking to the web and geospatial information systems (GIS) staff involved in providing information on the web and social media during the emergency response.

I learned that before the earthquake, it was very difficult for staff in local government to use social media, cloud computing and open data. To do so would have required a large and detailed policy, and no one initiative could afford to develop that policy. After the earthquake staff piled into the Art Gallery, set up temporary desks and laptops, and got the response underway. They used whatever worked, and had permission to experiment, make mistakes and continuously improve. They found that social media let them gauge the social mood in real time, and identify where people were frustrated, upset or confused due to lack of information. They answered questions directly, and fed information through to the media and web teams to plug the gaps. The GIS staff set up open data APIs to get water and sewage network data to the construction contractors in real time.
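
As a rough illustration (and emphatically not the actual CERA or Council system), a minimal open data endpoint serving network asset data as GeoJSON over HTTP might look something like the sketch below; the route, dataset names and records are invented for the example.

```python
# Illustrative sketch only: serving infrastructure network data as GeoJSON over
# a simple HTTP API, so contractors' GIS tools can pull the latest status
# rather than waiting for emailed files. All data below is made up.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Hypothetical in-memory stand-in for a live asset database.
NETWORKS = {
    "water": [
        {"type": "Feature",
         "geometry": {"type": "Point", "coordinates": [172.6362, -43.5321]},
         "properties": {"asset_id": "W-1042", "status": "damaged"}},
    ],
    "wastewater": [],
}

@app.route("/networks/<name>")
def network(name):
    features = NETWORKS.get(name)
    if features is None:
        abort(404)
    return jsonify({"type": "FeatureCollection", "features": features})

if __name__ == "__main__":
    app.run(port=8080)
```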

The earthquake was terrible. 30-40% of the buildings in our central city have been destroyed. I’ve said goodbye to six sets of friends in the last three weeks who’ve had to leave the city due to lost homes, jobs and businesses. One of my accountants died, and his colleagues were trapped under desks in the dark, in a collapsed building for several hours. There are many people still doing it hard, and in real need.

The silver lining is that much of the inertia that prevented organisations from using new and innovative approaches has been swept away. We’ve seen that new ways of thinking work, and they work better than the legacy thinking of the past. The Christchurch City Council (CCC) have understood this, and will be using a web based planning and consultation tool for drafting and seeking feedback on the Central City Recovery Plan.

We have an incredible opportunity before us. People have learned that it can be safe to do things differently. They have experiential evidence that doing things differently works. As a friend said, “bullshit has gone out the window”. The pressure on us, and the speed at which we must get things done, mean legacy methods just won’t be sufficient. We have an opportunity to use the latest thinking, our own, and from overseas. Just as Japan used the leading thinking of W. Edwards Deming to rebuild after World War II and create a dynamic economy, we can use the leading edge approaches of today to build a city of tomorrow.

We have the opportunity to build Christchurch 2.0.

To do so we’re going to have to upgrade our thinking.

Data-intensive science

This post is part of a project I’m running called the Framework for eResearch Adoption. The original post is here. There’ll be a series of posts on aspects of eResearch including data reuse, computationally intensive research, and virtual research collaborations.

The reuse and management of research data is becoming increasingly important. Data-intensive science represents a transition from traditional hypothesis and experimentation, to identifying patterns, and undertaking modelling and simulation using increasingly massive volumes of data collected by thousands of researchers the world over. This means more breakthroughs across research discipline boundaries, and more bang for the research buck.

Because of this, data reuse is rapidly becoming a focus of policy and funding agencies, internationally, and in New Zealand. Open data is now government policy1. Managing research data well is also becoming essential in ensuring the integrity, transparency and robustness of research, so it can be defended against criticism and attack.

This article explores the trends in research data reuse and management. Further posts will look at the current policy context in New Zealand, the future requirements for institutional data management, the risks of doing it poorly, and the benefits of doing it well.

What are the implications of these trends for eResearch and data reuse and management in New Zealand? What are the differences between Crown Research Institutes (CRIs) and Universities in this regard? What do we need in place institutionally and nationally to support improved data management, and uptake of data-intensive scientific methods?

Please comment at the end of this article, or email julian.carver@seradigm.co.nz to give feedback.

Global Trends in Research Data Reuse and Management

The most important global trend impacting on research data management is the emergence of data-intensive science, or the ‘Fourth Paradigm2’. This involves:

  • The transition from science being about 1) empirical observation of the natural world, to 2) theoretically based with hypotheses and experimentation, to 3) computational using modelling and simulation, and now to 4) data-intensive ‘eScience’.
  • Collecting more and more data through automated systems including sensor networks, large and small instruments, DNA sequencing, and satellite imagery. This means data is not just collected in a bare-minimum sense specifically for each research project; instead, oceans of data are becoming available for use by many different researchers.
  • The huge increase in data volumes enables researchers to sift through the data, identify patterns, draw connections, develop and test hypotheses, and run experiments ‘in-virtuo’ using simulations.
  • This unification through information systems of the processes of observation, hypothesis, experimentation and simulation means science can tackle bigger problems, on larger scales, and involving greater numbers of researchers across the globe.

The transition to data-intensive science includes a number of specific trends in the research sector. These are as follows.

Data collection and aggregation:

  • The proliferation of automated data capture and collection technologies in almost every field of research (e.g. fMRI in neuroscience, EM soil mapping, bathymetry, and satellite imagery such as NZ’s Land Cover Database)
  • Increased use of national level and global level discipline specific data repositories (e.g. GenBank, GBIF, GEOSS3, NZ Social Science Data Service4)
  • Different research disciplines adopting data sharing and reuse, and the fourth paradigm in general, at different paces, often driven by the existence, or not, of very large scale infrastructure (e.g. Hubble Space Telescope, Large Hadron Collider) that by default stores data centrally, and driven also by the emergence of professional norms around central deposit of data on publication (e.g. GenBank); a minimal sketch of querying one such repository follows this list.
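
As a minimal sketch of what reuse from such a repository looks like in practice, the snippet below queries GBIF’s public occurrence search API. It assumes the api.gbif.org/v1/occurrence/search endpoint and its scientificName, country and limit parameters; the species chosen and the response fields printed are illustrative and may not match the live API exactly.

```python
# Minimal sketch: pulling species occurrence records from a global
# discipline repository (GBIF) rather than collecting them afresh.
import requests

resp = requests.get(
    "https://api.gbif.org/v1/occurrence/search",
    params={"scientificName": "Apteryx mantelli",  # North Island brown kiwi
            "country": "NZ", "limit": 5},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

print("total matching records:", data.get("count"))
for rec in data.get("results", []):
    # Field names assumed from typical GBIF occurrence records.
    print(rec.get("species"),
          rec.get("decimalLatitude"), rec.get("decimalLongitude"),
          rec.get("year"))
```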

Changes in the scale of simulation:

  • The nature of models is changing from representing small local systems to much broader spatial/temporal scopes, and simulations are being used to understand larger scale phenomena and make predictions (e.g. global climate simulations, detailed simulations of the human heart).
  • Models are becoming larger than any one researcher can program themselves, and are large collaborative efforts.
  • Desktop computers are no longer sufficient to run models; simulations require high performance computers and clusters, and both require and generate massive amounts of data which need to be moved between research institutions across the globe.

Changes in the nature of and demand for verification/defensibility:

  • The grand challenges facing humanity require science to be done at large scales, and to challenge current consumption and behaviour patterns. This increasingly generates tension, and the scientific process is coming under much greater levels of scrutiny. Data on which conclusions are based, and the methods used to produce those results, have to be available for independent verification.
  • Scientific workflow technologies are emerging to automate and allow replication of data aggregation, analysis, interpretation and results, again opening research and the data underpinning that research to greater examination.

Discovery and access:

  • Discovery of relevant data is becoming an issue as the number of data sets, and the volume of data grows. Metadata catalogues and federated data search engines are becoming essential, as are data preservation and curation activities.
  • Researchers are starting to require the ability to trawl and do automated comparisons of datasets to see if they’re like theirs, and then be able to drill down, look at the attributes they measured, disambiguate terms, and determine to what extent the datasets are comparable.
  • Ways of structuring data to support discovery, access and comparison are being rapidly developed and adopted, including Linked Data, structured ontologies, and the semantic web (a minimal Linked Data example follows this list).
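
As a minimal example of the Linked Data approach, the sketch below describes a hypothetical dataset using the DCAT vocabulary via the rdflib library, producing Turtle that a metadata catalogue or federated search engine could harvest. The dataset URI, title, keywords and licence are invented for illustration.

```python
# Minimal sketch: describing a dataset as Linked Data (DCAT) so it can be
# discovered, compared and harvested by catalogues and federated search tools.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, DCTERMS

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

# Hypothetical dataset description.
ds = URIRef("http://example.org/dataset/river-water-quality")
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("River water quality observations")))
g.add((ds, DCTERMS.license,
       URIRef("https://creativecommons.org/licenses/by/3.0/nz/")))
g.add((ds, DCAT.keyword, Literal("water quality")))
g.add((ds, DCAT.keyword, Literal("freshwater")))

print(g.serialize(format="turtle"))
```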

Increased collaboration (nationally, internationally, and cross-disciplinary):

  • Increased specialisation in research expertise, and in support functions such as informatics & data management, means bigger research teams are necessary and in turn require more collaboration/coordination and processes to allow data sharing and reuse.
  • Methods such as remote diagnostics are being developed, where data will move between someone who needs to know an answer, and a specialist in another part of the world (e.g. high resolution video imaging in real time through a stereo microscope at a shipping port to a biosystematics expert in another country).
  • Increased engagement of ‘citizen scientists’, doing some of the work of data gathering, and crowdsourcing of data analysis (e.g. species observation networks, or Goldcorp opening its prospecting data and offering a reward to the geologists, prospectors and academics worldwide who helped it locate deposits)

Shifts in publication processes:

  • In some fields the number of publications based on reuse of data is starting to outstrip the number based on primary collection of the data.
  • Publishers are requiring datasets supporting the research to be lodged at the time of publication, given unique identifiers, and in some cases made available for review before papers are published.
  • Methods such as dataset citation and scientific workflows are emerging to cope with the need to manage complex data attribution chains.

These global trends in science are also driven by technology trends, including:

  • User expectations about search, discovery, visualisation, and collaboration tools are being set by global scale consumer level providers such as Google, Amazon, and Facebook, which are funded commercially (e.g. Google’s annual R&D budget is NZD $2B, about the same as NZ’s entire science system spend).
  • Cloud computing is emerging as a way to significantly reduce the cost of, massively increase the scalability of, and increase access to commodity computing and data storage infrastructure.
  • Mobile devices and their accompanying sensors, data storage and display technologies are being rapidly advanced by the consumer market (e.g. digital cameras, iPhones).
  • Software is increasingly shifting to the web as a delivery vehicle/user interface, software as a service is becoming more pervasive, and in many areas open source has become the dominant mode of software production.

National Trends

Research data management is also impacted by national level trends, in other countries and in New Zealand specifically. These include:

  • The establishment of national centres to provide expert advice and services on data preservation, collection, curation and reuse (e.g. the Australian National Data Service, the UK Digital Curation Centre)
  • Increased coordination and sharing of data management infrastructure and tools across research institutions
  • The emerging requirement from research funding agencies for data management planning to be included in funding bids (e.g. US National Science Foundation announced this in May 20105, the UK Natural Environment Research Council has this as a requirement)
  • The rapid development of the ‘Open and Transparent Government’ movement in the last two years meaning elevated expectations about data access from the public and politicians, and more public money being put into data infrastructure (e.g. the US Open Government Initiative)
  • Open access licensing frameworks being adopted by individual countries, often based on Creative Commons and/or open source licences (e.g. the UK Open Government Licence, the New Zealand Government Open Access and Licensing (NZGOAL)6 framework)
  • Increasing use of open data in public consultation processes (e.g. the recent consultation on the National Environmental Standard for Plantation Forestry in New Zealand7 used an online discussion forum and provided access to relevant government datasets)
  • The establishment of an ‘open data’ community outside of government and research organisations, who have the skills and desire to take publicly funded data and develop value added tools and services (e.g. GreenVictoria8, a service aimed at increasing public awareness of climate change, using water consumption data and other Australian Government datasets; SwimWhere9, an NZ mashup and iPhone app using water quality data)

In New Zealand the government has strongly signalled a move towards coordination and sharing of ICT systems, resources and data across the public sector. This is expressed in the recently released ‘Directions and Priorities for Government ICT’1. This mandates the use of shared services where they are available. It also has a particular focus on open data, covered in Direction 2 ‘Support open and transparent government’ which includes the following priority:

“Support the public, communities and business to contribute to policy development and performance improvement”

It is accompanied by the following statements:

“Open and active release of government data will create opportunities for innovation, and encourage the public and government organisations to engage in joint efforts to improve service delivery.”

“Government data effectively belongs to the New Zealand public, and its release and re-use has the potential to:

  • allow greater participation in government policy development by offering insight and expert knowledge on released data (e.g. using geospatial data to analyse patterns of crime in communities)
  • enable educational, research, and scientific communities to build on existing data to gain knowledge and expertise and use it for new purposes”

Government agencies and research organisations in New Zealand are being encouraged to use NZGOAL rather than ‘all rights reserved’ copyright licences. There is an expectation from government that publicly funded research data be made openly available unless there are very good reasons not to (e.g. public safety, privacy, commercial sensitivity, exclusive use until after publication).

Archives New Zealand is currently planning a Government Digital Archive. This will enable Archives New Zealand to take in large-scale transfers of government agency digital records, such as email messages, videos, databases and electronic documents. This may also be able to take in historical research datasets where organisations are not able to archive and publish these themselves. This project is being done in collaboration with the National Library of New Zealand and the existing infrastructure of the Library’s National Digital Heritage Archive (NDHA) will be leveraged to provide a full solution for digital public archives.

In the New Zealand research sector the National eScience Infrastructure (NeSI)10 business case has recently been approved by Cabinet. NeSI represents the most significant infrastructure investment for New Zealand’s Science System in the last twenty years. It will provide a nationally networked virtual high performance computing and data infrastructure facility distributed across NZ’s research institutions. NeSI is an initiative led by Canterbury University, Auckland University, NIWA and AgResearch, and is supported by Otago University and Landcare Research. It will coordinate access to high performance computing facilities at these institutions, and the BeSTGRID eScience data fabric, research tools and applications, and community engagement.

What do you think?

So, what are the implications of these trends for eResearch and data reuse and management in New Zealand? What are the differences between Crown Research Institutes (CRIs) and Universities in this regard? What do we need in place institutionally and nationally to support improved data management, and uptake of data-intensive scientific methods?

Please share your thoughts by commenting on this post, or by emailing julian.carver@seradigm.co.nz. Feedback will be incorporated into the Framework for eResearch Adoption project.

References

  1. Directions and Priorities for Government ICT http://www.dia.govt.nz/Directions-and-Priorities-for-Government-ICT
  2. Hey, Tony; Stewart Tansley and Kristin Tolle, Eds. “The Fourth Paradigm: Data-Intensive Scientific Discovery.” Microsoft Research. Redmond, Wash: 2009. PDF at http://research.microsoft.com/en-us/collaboration/fourthparadigm/default.aspx
  3. The Global Earth Observation System of Systems (GEOSS) Geoportal http://www.earthobservations.org/geoss.shtml
  4. http://www.nzssds.org.nz
  5. Scientists Seeking NSF Funding Will Soon Be Required to Submit Data Management Plans http://www.nsf.gov/news/news_summ.jsp?cntn_id=116928
  6. New Zealand Government Open Access and Licensing (NZGOAL) framework http://www.e.govt.nz/policy/nzgoal
  7. National Environmental Standard for Plantation Forestry in New Zealand http://www.mfe.govt.nz/laws/standards/forestry/index.html
  8. GreenVictoria http://www.ebudgetplanner.com/
  9. SwimWhere http://swimwhere.info/
  10. NeSI http://www.nesi.org.nz

The four noble truths of open data

In October this year Chris McDowall wrote a post called The Zen of Open Data1. This got me thinking, somewhat quizzically, about the relationship between Zen thinking and ‘open’ thinking. In commenting on the post, Chris and I came up with the somewhat tongue-in-cheek ‘Four Noble Truths of Open Data’:

  1. Life means suffering.
  2. The origin of suffering is email attachments in proprietary formats and data embedded in PDFs.
  3. The cessation of suffering is attainable through open standards and APIs.
  4. Open data is the path to the cessation of suffering.

So, apart from a pun on the word ‘attachment’, what am I on about here? What does Zen thinking have in common with ‘open thinking’?

Firstly, my understanding of what Zen’s four noble truths2 mean, then a comparison with open data.

One way I’ve heard ‘life means suffering’ explained is that ‘life is unsatisfactory’. Life, due to our own limitations, is imperfect. It’s not that the universe is somehow imperfect, it’s that due to the way we perceive and process our interactions with the universe, our experience of it is unsatisfactory.

The origin of this suffering is our attachment to the things we desire. We desire material possessions, wealth, popularity, love, happiness. We crave for these things when we don’t have them, and cling to them when we do. We even cling to our own idea of self, to our own continued existence. These things are not inherently bad or wrong, but they are transient, impermanent. The inevitable loss of them, or even the fear of their possible loss, causes us to suffer.

But it’s OK. Because the cause of our suffering is internal, not something outside us that we can’t control, we can do something about it. All we have to do is to let go of these attachments. There is a way to do this, a set of actions and ways of thinking that take us out of suffering. Buddhism calls this the Eightfold Path3, and it comprises things like right intention, right speech, right action.

So what does all this have to do with open data? While I facetiously said that the origin of suffering is email attachments in proprietary formats, perhaps it is a valid comparison. What are proprietary formats if not an effort by the company that designed them to hold on to market share, to control their customers, and to protect against losing them as a source of revenue? When a public (or even private) sector organisation puts up arguments that it shouldn’t release data, what is it expressing? Claims like ‘it’s our intellectual property’, ‘people might misinterpret it’, ‘it might damage our reputation’: are these not all simply fears of loss?

At a deeper level, is it that opening up data is to acknowledge that the boundary around an organisation is simply an idea, an arbitrary construct? Do people in those organisations feel, at some level, less safe the more diffuse that boundary becomes?

Buddhism teaches that it is not easy to let go of attachment. Our idea of self as an independent entity, and our desires for things, are deeply ingrained. Is this also the case with organisations moving to open up their data?

If so, what is the path? I suggest that use of open standards, of licensing terms that permit reuse in some way, and describing your data in a catalogue, are a good way to begin the journey. Letting go, even in this small way, is the first step to acknowledging the possibility that we are all part of something bigger.

References:

  1. The Zen of Open Data – http://sciblogs.co.nz/seeing-data/2010/10/12/the-zen-of-open-data/
  2. The Four Noble Truths – http://en.wikipedia.org/wiki/Four_Noble_Truths, and http://www.rinpoche.com/fornob.html
  3. The Noble Eightfold Path – http://en.wikipedia.org/wiki/The_eightfold_path

Making sense of data management ‘landscapes’

There are some fantastic developments in visualising data. From tag clouds to infographics to heatmaps to geospatial mashups to sparklines, finding new ways to understand and present data is essential in extracting value from the ‘data deluge’, and solving the small, medium and grand challenges of our time. This excites me enormously.

It is, however, the domain of people who are cleverer than I (or at least much more adept at programming and using databases and analytics tools).

One of my major areas of work is on understanding and improving the whole-of-sector ‘landscape’ of data/information management in the environment sector. I’ve worked for the last eight years on strategy, policy and projects to help connect the many different datasets and systems in this domain. This is in order to enable better access to knowledge generated from research, and better decision making and improved environmental management (including biodiversity, biosecurity, water and climate) by government agencies such as MfE, DOC, MAF Biosecurity, ERMA and the AHB, by local government agencies, and by NGOs and community groups.

By sharing data, and providing ‘middleware’ (such as the NZ Organisms Register) to connect data across different agencies, people have increased opportunity to develop and/or use tools to enhance the quality of the decisions they’re making, and the cost effectiveness of the limited resources we have for environmental management.

I’ve had a particular focus on information systems for biodiversity (the conservation of native species and ecosystems), but am now doing more work relating to information systems supporting biosecurity (preventing pest incursions and eradicating/managing existing pests).

Recently the Terrestrial and Freshwater Biodiversity Information Systems Programme (TFBIS) asked me to help determine where the gaps were in the biodiversity information systems ‘landscape’. I had written the TFBIS strategy in 2006/2007, and since then a number of systems have been developed to provide access to and connect existing datasets. The strategy helped give direction to approval of funding grants for such systems, but didn’t give a way of monitoring the progressive development of an interconnected and federated ‘meta-system’ for biodiversity management, or of understanding which major pieces of ‘middleware’ needed to be developed next.

So, I made a biodiversity data landscape diagram. This shows the primary datasets, the sources of aggregated primary data, national middleware, web services, models & data transformation tools, interpretive tools, and user interfaces (for discovery, access, and data entry).

The diagram is a work in progress, and is very likely missing some items. If you know of anything that should be there but isn’t, please let me know. At the moment it’s a PDF, with many of the items linked to their web sites. In the future I’d like to create a more interactive version, hooked to a proper metadata repository.
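
As a rough sketch of what that more interactive, regenerable version could look like, the snippet below draws a landscape diagram from a small catalogue structure using the graphviz Python package. The items, levels and links are invented placeholders, not the real TFBIS catalogue.

```python
# Rough sketch: generating a landscape diagram from catalogue metadata rather
# than drawing it by hand, so it can be regenerated whenever the metadata
# repository changes. The catalogue entries below are illustrative only.
import graphviz

catalogue = {
    "Species observation dataset": {"level": "primary dataset", "feeds": []},
    "NZ Organisms Register": {"level": "national middleware",
                              "feeds": ["Species observation dataset"]},
    "Biodiversity discovery portal": {"level": "user interface",
                                      "feeds": ["NZ Organisms Register"]},
}

dot = graphviz.Digraph(comment="Biodiversity data landscape (sketch)")
for name, item in catalogue.items():
    dot.node(name, label=f"{name}\n({item['level']})")
for name, item in catalogue.items():
    for source in item["feeds"]:
        dot.edge(source, name)

# Writes biodiversity_landscape.pdf alongside the script.
dot.render("biodiversity_landscape", format="pdf", cleanup=True)
```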

It’d also be neat to see this approach used for other things like biosecurity, water, climate etc.

Biodiversity Data Landscape Diagram

See the key to the diagram for descriptions of the types of items and definitions for each of the ‘levels’.

The data deluge

Next week I’m facilitating the ‘Research Data Matters’ workshop for the Ministry of Research, Science and Technology (MoRST), the National Library of New Zealand and the Royal Society of New Zealand. This is a one-day event to discuss issues surrounding the long-term management of publicly-funded research data.

I’ve been working on research data policy issues with MoRST for about seven years now, and it’s exciting to see how far we’ve come in that time. Last week one of my frequent collaborators at MoRST asked me whether I’d seen any infographics that represented the ‘data deluge’, in particular the figures cited in the article by that name from the Joint Information Systems Committee (JISC) in the UK.

I’ve seen some excellent ones on the size of the Internet, and file storage volumes, but nothing of that nature, so I decided to make one. This uses physical objects to show the relative scale of moving from a megabyte up to an exabyte. Click the image for a larger version:

data deluge infographic

Apparently the current size of the Internet is estimated at 5 million terabytes, or 5 exabytes. I note the JISC article is from late 2004, so estimates of the total annual production of information may well have gone up since then.

For those particularly interested in the actual sizes, they’re not scaled by exactly 1,000 each time, but are fairly close. Here are the numbers (with a quick check of the ratios after the list):

Length of a tiny ant 1.4 millimetres
Height of a short person 1.4 metres
Length of the Auckland Harbour Bridge 1,020 metres
Length of New Zealand 1,600 kilometres
Diameter of the Sun 1,390,000 km
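
Here is the quick check of the ratios, with every length converted to metres first:

```python
# How close is each step in the infographic to a factor of 1,000?
lengths_m = [
    ("tiny ant", 1.4e-3),                 # 1.4 mm
    ("short person", 1.4),                # 1.4 m
    ("Auckland Harbour Bridge", 1.02e3),  # 1,020 m
    ("length of New Zealand", 1.6e6),     # 1,600 km
    ("diameter of the Sun", 1.39e9),      # 1,390,000 km
]

for (name_a, a), (name_b, b) in zip(lengths_m, lengths_m[1:]):
    print(f"{name_b} / {name_a}: x{b / a:,.0f}")

# The ratios come out between roughly x730 and x1,570, i.e. 'fairly close'
# to the x1,000 step between each unit of data volume.
```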

This infographic is licensed by Julian Carver under the Creative Commons Attribution-ShareAlike 3.0 New Zealand License.

What is the (e) in your eResearch?

First eMail, then eCommerce, eBusiness & eProcurement, eGovernment, eDating, and now eResearch. Does simply putting an ‘e’ in front of an existing practice make it somehow sexier, and more now? I headed along to the Wellington eResearch Symposium last week to find out.

OK, that’s not true. I did go to the Wellington eResearch Symposium last week, but I already have deeply held views about eResearch and have been advocating the concept for six or seven years. I’m just pretending to be a journalist today, and that sounded like something a journalist would say.

To read the rest of my write up of the event, visit my guest post about it on Sciblogs, the site that brings together the best science bloggers in NZ on one website.

The texture, sound and smell of the digital world – a tribute to @littlehigh

In season 1, episode 8 of Buffy the Vampire Slayer, “I, Robot, You Jane”, Giles, the librarian, comments to Jenny Calendar, the computer science teacher, that what he doesn’t like about computers is the smell.

“What do you mean, computers don’t smell?”

she says. Giles replies:

“Smell is the most powerful trigger to the memory there is. A certain flower or a whiff of smoke can bring up experiences long forgotten. Books smell… musty and rich. The knowledge gained from a computer is… it has no texture, no context. It’s there and then it’s gone. If it’s to last, then the getting of knowledge should be tangible. It should be, um… smelly.”

I first met Paul Reynolds of McGovern Online (or @littlehigh as he became known on Flickr, Twitter and other social networks) at the National Digital Forum conference in 2007. The NDF is “a coalition of museums, archives, art galleries, libraries and government departments working together to enhance electronic access to New Zealand’s culture and heritage”, something which I learned was very dear to Paul’s heart.

I had seen Paul on TV once or twice before, and admired his insightful and engaging style. We bumped into each other once or twice a year at conferences, or walking along Lambton Quay. I regularly listened to podcasts of his ‘Virtual World’ discussions with Jim Mora on Radio New Zealand.

Many of us in the Internet, open government, and open data space spent much of our formative years in the digital world. Playing video games as kids and teenagers, hacking on early home computers, and reading cyberpunk novels. The digital world had colour, and sound, but it was garish, tinny, maybe even a bit sterile.

What I loved about Paul Reynolds was the way he brought texture and richness to the digital world. He had a unique way of connecting the beautiful, tactile, physical, and even musty nature of art galleries, museums, and libraries with the expression of knowledge in digital environments. He seemed to understand the innately human aspects of both, and bridge them in a way no one else could.

He understood the relationship between content, people, and place in the physical world, and effortlessly applied that understanding to technology, the web, and social media. He did so in a way that was wry, amusing, and both pragmatic and visionary. He explained new things in ways that were easy to understand, often simultaneously with the excitement of a 7-year-old boy and the wisdom of a 70-year-old man.

Paul, with your beautiful lilting accent, your expansive mind, and your love for literature, art, culture and technology, you gave the digital world texture, smell and sound. You shall be missed.

Action over words – combining electronic and analogue facilitation

At the Open Government Data Barcamp this Saturday I was asked to facilitate the closing session. The purpose of the session was to come up with a shortlist of projects to be worked on the next day at the hackfest. Nat Torkington, while not physically present at the event, had been looking over our shoulder virtually on Twitter, and had beseeched us to leave the weekend with some real things built. How on earth was I going to pull this off?

There were 160 people at the Barcamp, and three rooms: a large auditorium, a medium sized room, and a cafeteria. Earlier in the day I’d facilitated a session on environmental data management in the medium sized room, with about 40 people. That was about capacity for that room, so I really had to use the auditorium. The challenge with facilitating in an auditorium style setting is that it’s very hard to get people up and moving to do Post-it note clustering exercises, and small group work is impossible. I only had 45 minutes to get suggestions brainstormed and short listed, and I wanted to involve everyone in the process.

During the day the 60% or so of people with Internet connected devices (iPhones, laptops and netbooks) had been twittering the event using the #opengovt tag. I’d been keeping an eye on all the tweets using Twitterfall.

So I decided to experiment with a hybrid electronic/analogue approach. I got the people with Internet connected devices to sit in the middle of the rows, and those without to sit at the edges. I then got a couple of people to hand out Post-it notes and pens to those without devices, and asked them to write suggestions for projects to work on tomorrow, one per Post-it. I also asked those with devices to tweet the suggestions using the #opengovt tag.

I then had the Twitterfall feed projected onto the large screen so everyone could see the suggestions rolling in. There was one every 30 seconds or so for a good 15 or 20 minutes. Dan Randow and Jonathan Hunt were on stage with laptops summarising the suggestions on the open.org.nz wiki.

Once people had finished writing suggestions on the Post-it notes I got those on the edges of each of the rows up on stage, and got them to put the Post-its on a wall I’d covered with large sheets of paper. I gave them the standard instruction to ‘put like with like’ and keep moving the Post-its until they had stabilised into categories. Two of the people were given vivid markers and asked to draw circles around the groups of Post-its and give each group a title.

After this was all done we had a set of suggestions, with an emerging set of priorities based on the categories of Post-its and the frequency of suggestion tweets on particular topics. I took photos of the Post-it clusters and emailed them to Mark Harris who later that evening summarised it all down to six projects for the Hackfest.
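
For what it’s worth, the tallying step is easy to script afterwards. A small sketch, assuming the suggestion tweets have been exported as plain text; the suggestions and topic keywords below are invented examples.

```python
# Small sketch: counting how often each broad topic keyword appears in the
# exported suggestion tweets, to get a rough sense of priorities.
from collections import Counter

suggestions = [
    "#opengovt build a catalogue of govt datasets",
    "#opengovt transport timetable API mashup",
    "#opengovt dataset catalogue with search",
    "#opengovt visualise crime stats by region",
]

topics = ["catalogue", "transport", "crime", "api"]
counts = Counter(
    topic
    for text in suggestions
    for topic in topics
    if topic in text.lower()
)
print(counts.most_common())
```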

The projects were:

In the morning these were written up on big sheets of paper at the front of the room, and Mark asked for expressions of interest in working on each project. The cat.open.org.nz project didn’t need any work at that point, as it was waiting on software from the Sunlight Foundation to be ready to migrate to. The Transport project was seen as a bit difficult to achieve on the day, so the remaining four projects were selected, and a table assigned for each project. People got to work, and the results so far can be seen by following the links above.

For a much more comprehensive write up of the whole Barcamp, see Julie’s fantastic post on Idealog.

3 Pillars of Open Government

Can politicians embrace social computing in a way that is open, honest and truly participatory, rather than simply cynical bandwagon jumping? Was David Cameron, the UK opposition leader, wrong when he said that “too many tweets might make a twat”? It seems so.

The visit of Senator Kate Lundy to New Zealand, and the talk she gave to a packed room at Archives NZ on the evening of 26 August, proved, irrevocably, to me, that at least one politician is using social computing in a very powerful and authentic way.

Here’s what Kate had to say:

The ‘3 Pillars of Open Government’ are:

  1. Citizen Centric Services
  2. Facilitating Innovation
  3. Open and Transparent Government

1. Citizen Centric Services

There are three tiers of government in Australia: local, state and federal. One of the big challenges is achieving an appropriate level of coordination between these three tiers, so that you as a citizen are not mired in a mesh of bureaucratic red tape. For example, even moving house and getting a new broadband connection can hit each of these three spheres.

How do we deploy geospatial data and geocoding data held by government? One site that demonstrates this is the Australian stimulus package projects and investments site, where you can tap in your postcode and it will show you the projects in your area, how the money is being spent, and how the projects are going.

How do we engage citizens in the process of service delivery? The Australian Govt2.0 taskforce is the way the current government is codifying the potential uses of Web 2.0 technologies to facilitate citizen engagement. Government agencies are large bureaucracies that often act as silos. The Govt2.0 taskforce aims to provide input to Cabinet on a number of policy ideas that would never have come up from individual agencies, or even through a set of agencies working together. It includes a blend of both public and private sector leaders in digital innovation. The taskforce reports in December and has been asked to come to Cabinet with some excellent ideas that can be implemented immediately, and some examples of exciting things we can do in the future. The taskforce is focusing on showcasing innovations that are happening in the public sector and then can be emulated, mashed up and remixed. Kate said that “Unless we create environments where we can ask citizens how they want things done, we’re crippling our ability as a nation to innovate.”

2. Facilitating Innovation

The Govt2.0 methodology was designed as an example of facilitating innovation through digital technology. The core focus of facilitating innovation is about opening access to government data so both public and private institutions can build useful services and tools on top of it. This adds value to the datasets, as well as providing better ability for collaboration between the government and broader community. An example of this in action was the recent emergency management response and coordination in the Victorian bushfires in Australia.

Kate mentioned the report that’s just been released in NZ on the significant economic benefits of open access to spatial data. In a digital environment, technologies enable collaborations that provide economic benefit and can enhance the way government works. It’s about not being afraid of sharing.

3. Open and Transparent Government

All constituencies want greater accountability from Government.

Australia has made a decision at Cabinet level to change the default position of government in relation to public sector information. Government now will make everything publicly available unless there is a reason not to. There are still complexities and costs around the Freedom of Information Act, and these are a profound barrier. The policy statement from Cabinet however changes everything. A default position of openness is a great place to be. The time the most dynamic change is possible is during a change in government, and during a recession.

Australia has a reform of the Freedom of Information Act legislation underway in order to reduce the complexities and costs of information that would otherwise be publicly available. Their National Archives policies on openness have helped with this process. They also have an Information Commissioner Bill before parliament currently; Kate believes that this role will be quite central in guiding agencies to make their information more accessible in a digital environment.

She said “Open standards are absolutely critical, they are tax payers’ insurance against government project cost blowouts in the future.”

Kate made an interesting and important distinction between transparent and accessible government, and transparent and accountable politicians. The line between these is a bit blurry at the moment, and that conversation needs to be furthered at a public policy level. There is a need to separately understand agency public consultation through social computing technologies, and politicians using the same method to create more open conversation with their constituents. This will get very interesting when the advice politicians are getting from their agencies/officials conflicts with the advice they get from open, social computing enabled engagement with citizens.

I was hugely impressed by Senator Kate Lundy’s enthusiasm, passion, and belief in the viability of increasing openness in government. More on her innovative PublicSphere methods in a subsequent post.
