Testimony of Michael I. Shamos before the
Election Assistance Commission Technical Guidelines Development Committee
Subcommittee on Computer Security and Transparency
September 20, 2004
My name is Michael Shamos. I have been a faculty member in the School of Computer Science at Carnegie Mellon University.
The Guidelines Development Process
Before we can even begin to discuss what evaluation guidelines ought to be, we have to discuss the process of developing guidelines in the first place. I want to start right away by arguing that the process I am involved in here in this room today is not the right one. Interested parties should not have to wait for an invitation to a hearing, then travel to the hearing at their own expense to explain their possibly complicated opinions in a few minutes via a formal statement. I’ve heard the claim that the process is open because there will be a defined period in which written public comment will be permitted. That resembles the federal regulatory process, which is well understood by lobbyists and special interest groups, but is hardly conducive to participation by the general public. And I hope that the IEEE model, in which draft standards are available on payment of a fee, is not adopted.
Such conditions do not promote a genuinely open process. It may be true that there are numerous guidelines in other fields in which the public has little or no interest, such as standards for cement to resist external sulfate attack. Voting, by contrast, is a topic that affects every member of the public. It is a subject in which the public is not inclined to trust anyone, including politicians, government officials, computer scientists and standards makers. If they don’t trust us, they must have a fluid way to participate in the process.
Once published, standards tend to
become fossilized. It is only of minimal
help to refer to such standards merely as “guidelines.” As more manufacturers invest in designs and
technologies that conform to given guidelines, they become highly resistant to
allowing changes to be made, and certainly do not encourage them. This may account for the fact that the current Federal Voting System Standards fail to address many computer security threats whose risks became known after the standards-making process was well underway. The result is that
insecure systems routinely attain qualification when tested to these inadequate
standards. To the extent that jurisdictions rely on federal qualification as a substitute for thorough certification, the FVSS have actually reduced the level of voting system security in the United States.
Guidelines must be responsive to developments in the field. If new security exploits become known, the guidelines must be revised quickly to plug them. Otherwise, we will repeat the current situation, in which lawsuits are being brought to prevent the use of systems that were previously certified and might well continue to satisfy current certification criteria.
I wouldn’t be complaining if I didn’t have a proposed solution. I believe that the Internet model of Requests for Comment and the resulting development of guidelines through sustained participation by any knowledgeable and interested party, without barrier or formality, is the right one.
Will time-varying guidelines make it more difficult for vendors and jurisdictions to conform to them? Yes, but that is the cost of keeping up with an ever more widespread and ever more clever intruder community.
Scope of the Guidelines
Voting is a process that begins with registration and ends with death. At least, it should end with death, but frequently does not because of the weakness of registration systems in properly purging voters from the rolls. This is only one example of how external administrative factors influence the integrity of elections. Security guidelines must take into account the fact that voting really encompasses over 10,000 processes supervised by tens of thousands of people and administered by over a million poll workers of varying levels of interest, trustworthiness and training. Having a completely secure voting machine is not sufficient if ineligible people are allowed to vote on it, or if eligible people can vote more than once.
Any examination of security should be comprehensive and should not focus exclusively on any one technology or method of voting. The claim is made, for example, that DRE machines are insecure, so therefore we should return to optical scan voting. But no one has done a security evaluation of optical scan systems, particularly with respect to physical custody of the paper ballots. These ballots spend long periods of time out of sight of the public and, in many cases, outside the sight of any election judge. They are sealed with plastic seals that are easily duplicated. If the Guidelines are to deal effectively with optical ballots, they must take ballot-handling into account.
Nature of the Guidelines
The FVSS are pass/fail in nature. Either a system meets a standard or it doesn’t. Such a threshold system provides scant incentive to a vendor to exceed a standard, let alone expend additional funds to develop improvements. David Chaum, who will speak later today, suggests that the guidelines should not be binary, but should contain a rating mechanism so the quality of various systems can be evaluated along several dimensions. I am in wholehearted agreement with this idea because it allows comparison of different voting systems and methods, and provides an incentive for vendors to surpass one another.
There is much disagreement among experts as to how good a system must be to meet a guideline. Some security specialists seem to suggest that a system must be perfect in order to be used in an election; that is, it can exhibit no vulnerabilities whatsoever. If that is indeed the test, then we can all go home, for the task will be futile. Others propose that since voting is a matter of national security, the same level of vigilance must be maintained as we apply to operational battle plans or nuclear launch codes. That may be so, in which case the assumption should be stated explicitly and we should be prepared to bear the expense of obtaining top-secret clearances for 1.4 million poll workers, many of whom, it turns out, have criminal records, which is not, in most states, a barrier to handling election materials.
Audit Trails and Their Role in Security
There has been much written and spoken about audit trails that is incorrect and that should be corrected because of the important role audit trails play in security. First we must draw a distinction between an audit, which is the process of verifying that no irregular events have taken place during the election, such as resetting the machine counters during voting or counting a precinct twice, and ballot reconstruction, which is retrieving the individual ballot images of each voter who has voted, usually for recounting. Let us call the latter a “ballot trail” and the former an “audit trail.” We need separate terms for them because they are so often confused.
The argument is made that a ballot trail must be on paper since no electronic trail can be trusted. The argument is wrong in several respects. First, whether or not a ballot trail can be relied upon depends on whether it was created correctly and whether it has been preserved without tampering. While voter verification of a ballot trail may indicate that it has been created properly, it affords no assurance that its integrity will be maintained after the voter has left the polling place.
Second, an electronic ballot trail is fully reliable if it is well-designed, can be tested and is impervious to attack. On the other hand, if the ballot trail, paper or electronic, has been compromised, it is of no value except that it may provide evidence to forensic investigators of how the compromise was accomplished. The widespread movement toward paper ballot trails has essentially foreclosed necessary research into alternatives.
Some computer scientists have alleged, without providing even a convincing argument, let alone a demonstration, that DRE machines do not allow a “meaningful recount.” Certainly if the trail mechanism has been infiltrated, the trail will not permit a recount, but this is also true of paper. A functioning electronic ballot trail mechanism is more reliable than any paper one can possibly be.
Despite vendor claims to the contrary and the evident belief of some election officials, there presently exists no commercial implementation of a voter-verifiable paper trail. There are paper-trail machines, to be sure, but each of them inserts bar codes, numbers or cryptographic indicia onto the ballot that the voter cannot decipher or understand, let alone verify. These indicia, supposedly used to prevent the introduction of spurious ballots, can in fact be used to invalidate perfectly proper ones, and the voter will be none the wiser.
Some current commercial paper trail
implementations have such severe flaws that they violate the laws of the very
states in which they have been so hastily adopted. The Sequoia system used in Nevada is one example.
The Nevada Revised Statutes state, at Section 293B.065, that the “voting system must secure to the voter privacy and independence in the act of voting.” Possibly this was interpreted by Nevada officials to mean that the voter has privacy during the actual act of voting, but that after the voter leaves the polling place it is acceptable for poll workers to review how everyone voted.
Audit trails, as distinguished from ballot trails, are a different matter. When properly implemented and not compromised, they maintain a record of events surrounding the election, such as the opening of polls, the loading of ballot styles, the recording of votes and the performance of administrative functions. These records are necessary to reconstruct the steps that were taken during the election so that procedural integrity can be assured.
The belief is widespread that a ballot trail can be used to accomplish a reliable recount. Aside from recounts that are mandated by law, such as the 1% manual recount in California, a ballot trail supports a reliable recount only if it was created correctly and has been preserved without tampering.
Security cannot be evaluated without a well-articulated threat model. The reason is that without such a model it is impossible to determine whether the system adequately resists the threats. The model is just as important as the standards that relate to it. Unless the model is comprehensive and agreed upon, any resulting standards will induce a false sense of security even if they are met.
Let’s be serious. No set of static equipment standards will do the job. Every system is used within a real-life context of state, county and polling place culture, laws and procedures. Election officials, inspectors, watchers, voters and politicians all operate with differing degrees of vigilance and experience. The notion that some collection of standards will suffice to ensure voting system security is a pipe dream.
A voting system must be evaluated in situ. Whether a system meets the guidelines cannot be determined by sending machines to a laboratory for a period of time. The procedures, policies and safeguards that are actually used must be evaluated. It is conceivable that a given system will be secure when used in one jurisdiction but not in another.
This is not to say that laboratories have no role, but that role must be carefully understood and circumscribed. Currently the FVSS specify few procedures for the laboratories to follow in testing voting systems. This means that even if the labs are completely diligent, when they certify that a system has met the standards the public has no way to find out what the lab did to verify that fact, or what deficiencies were observed. Aside from being opaque, the process is also inadequate because it does not include observation of the system in actual use and does not include “red team” exercises designed to uncover serious flaws that may not technically violate the standards.
To summarize, my most important recommendations are these: the security of a voting system must be evaluated holistically, as an entire system rather than as a set of hardware and software components; the process for developing guidelines must be as open and inclusive as possible; and the guidelines should deal with quantitative measures of performance, not simply pass/fail requirements. I thank you for the opportunity to address the Subcommittee here today.
Biography of Michael I. Shamos
Michael I. Shamos is Distinguished Career Professor in the School of Computer Science at Carnegie Mellon University.
Dr. Shamos received an A.B. in Physics from Princeton University.
From 1980 to 2000 he was statutory examiner of computerized voting systems for the Secretary of the Commonwealth of Pennsylvania.
Dr. Shamos has been an expert witness in two recent lawsuits involving electronic voting, including Wexler v. Lepore.
Dr. Shamos has not received any federal grants or contracts during the past fiscal year.
Further information is available at http://euro.ecom.cmu.edu/shamos.html.
Clifton, J.R. et al., “Standards for Evaluating the Susceptibility of Cement-Based Materials to External Sulfate Attack.” Available at http://fire.nist.gov/bfrlpubs/build99/art083.html. No disrespect is intended – the point is merely that a cement standard is unlikely to excite the interest of the general public.