
The Power of New Technology



When Australia’s Chief Scientist presents his considered view on the future of artificial intelligence in Australia, it is worth paying very close attention. In his opinion piece in the weekend SMH, Alan Finkel is very clear about the key issues and, by inference, very clear about the ideological line the Turnbull government should take. The Chief Scientist is forthright in his condemnation of oppressive surveillance; he specifically reflects on the antisemitism of Nazi Germany experienced by his relatives. But, most basically, Finkel thinks there is one key question that will determine the broad direction of future outcomes: ‘what kind of society do we want to be?’ He goes on to conclude ‘[o]nly Australians, together can say’ (Alan Finkel, ‘Power of new technology raises critical question’, SMH, News Review, July 21-22, 2018, p. 29).

Any government would agree that this is the kind of classic one-liner that makes Chief Scientists worth paying. In one line (one question, one answer) every issue is deflected into the future tense, and into the kind of homily that politicians can (and do) run with when it’s all too hard.

Of course, it would be nice if Australians, all together, could say anything. But, as we all really know, this is classic political hyperbole. Australians can vote, but a democratic outcome does not mean that there is, or ever could be, one voice. That’s the point: Australia is a liberal democratic society that tolerates multiple points of view. At the same time, we all put up with the fact that the future of AI will be determined in hierarchical power structures dominated by big businesses (like Apple, Google, Facebook and Amazon), government ministers and their departments (and, if we’re lucky, expert committees and Chief Scientists, giving advice - and being paid in salaries, consultancies, and research grants, for the privilege). Right at the bottom of the decision-making heap sits the voting public. This is as good as it gets in modern western democracies.

Whether changing governments, or holding strong popular opinions, can force the details of public policy on AI will be the unpredictable outcome of a complex interplay between key institutional stakeholders and the organisations and individuals who can be bothered agitating for change. That is, privacy outcomes in public policy are unpredictable and will depend on pushback against the easy accessibility of our personal data. The broad direction of new technology and public policy is, however, extremely predictable: there will be more labour-saving machinery (e.g. robotics), and there will be more sophisticated communication technology dependent on big data.

The role of social processes (democratic and undemocratic alike) in the development of hardware and data processing algorithms can never be entirely predictable. Social and political developments are less predictable than the applications of laws and theorems in scientific research. Therefore, it would be wonderful if expert scientific advice could be a little more nuanced about subjects like society and democracy and defer more to those who specialise in the conflicted realities of social life.

It is obvious to many observers of government processes that, because scientists (like Alan Finkel) are professional specialists, they often make one of two ritualistic moves. They either defer to other experts, including politicians and other scientists (and sometimes social analysts), or they presume to be experts about everything (including ‘society’). Often they do both, which appears to be the current position of Alan Finkel on the subject of the control of big data by public policy.

Governments and their advisers need to take note of these now fairly common perceptions of expertise and its role in government. They also need to grasp the current nettle: together, governments and expert advisers need to come up with legislation that will put AI providers and large corporate and government users ‘back in the box’. This is what increasing numbers of Australian citizens want.

The current fiasco that is the Australian Government’s My Health Record website underscores many of the issues raised so far. To paraphrase the recent conclusions of the Australian Privacy Foundation: if it looks like a government surveillance system, and operates like a government surveillance system, it probably is a government surveillance system. The intrusive nature of this website should be clear from the government’s change in tactic to allow individual Medicare card holders to ‘opt out’ (as opposed to ‘opting in’). Unfortunately, if an adult has children covered by the one card, there is no provision for these children to opt out. Tough: in for life.

This website also cannot provide clinical advice. All it can do is give about 900,000 individuals access to some of your health records. Pam Dixon, executive director of the World Privacy Forum, says, ‘[i]f 900,000 individuals can gain access to a record, that means there are 900,000 potential misuses of that record. That is an unusual threat vector; you don’t usually see this in other healthcare systems’ (Jennifer Duke, Ben Grubb and Esther Han, ‘To opt out or not: split on health data’, SMH, July 21-22, 2018, p. 21).

All this is symptomatic of what the future of AI in Australia, and elsewhere, is likely to involve: a series of ambit claims by big data users and providers on our personal data. Further, we should never forget that most scientists and politicians are wedded to big data. As Alan Finkel’s writing so clearly implies, it’s up to the voting public to force change; we can’t expect experts, expert systems, and governments to legislate themselves into limited data access; that would involve a clear conflict of interest.

Nonetheless, in liberal democratic societies like Australia, we depend upon experts to advise governments and the general public. The Chief Scientist and expert bodies, like the Human Rights Commission, should present opinions and analysis - and should be able to be forthright and fearless in the process. We await the Commission’s forthcoming issues paper on the impact of technology (including AI) on human rights with bated breath.

Tom Jagtenberg
