Posts by David García:

Mum, I want a nuclear power station

Outsourcing vs in-house development

Companies constantly have to choose between options that reduce costs and maximise benefits. They must decide whether to rent or buy offices, and whether to externalise certain business functions through outsourcing, offshoring and other strategies that, in the end, help the company develop its core competence without forcing it to grow or shrink its workforce or premises as demand changes.

In the IT field, such decisions are normally related to software development: the typical ‘do we do it in-house or externally?’ question. But it is not very common to apply similar strategies to other areas in the IT field.

In the area of IT infrastructure (where the company’s services run), such thinking is often dismissed: the applications are considered so critical that delegating their operation, even a small part of it, to a third party seems a bad idea for fear of losing ‘total control’. Yet right now, and with increasing frequency – above all in the internet company sector – there is a growing perception that a company does not need to build its own infrastructure to run its applications. Some even decide that they don’t need any infrastructure of their own to develop their core business.

If we translate the problem into a more everyday equivalent: when we flick a switch to turn on a light, we want illumination and we don’t really care where the electricity is produced; we simply want the thing to work. We don’t need a nuclear power station at home to light a bulb, but if we have a company and electricity is key to our business, we will surely want to keep the supply up in case of failure. In that case it is more practical to have multiple failover providers than to become our own provider.

Utility computing: IT like water or electricity

For years the leading computer companies have been shaping this idea, seeking to turn IT into just another commodity for client companies, like electricity, gas or water.

Certain components, such as computation and storage, are now commoditised to a level mature enough for companies to use them in production environments. This is important because it permits economies of scale at the supplier, which can therefore offer a better service at a better price.

So we could have our application running on virtual servers that we pay for according to usage. Our backups, or the content we serve, could live on remote storage servers, avoiding spending on backup and storage devices (a good backup system can be a significant cost). We could serve certain content through content delivery networks (giving the impression of having a data centre near the client) and, bit by bit, externalise certain infrastructure services to specialist providers, reducing costs and improving quality of service.
We could go even further and adapt certain components so that they are provided by specialist third-party companies. This way we would avoid development and maintenance costs, which in certain cases can be an important part of an application’s budget. If a component is not adding value to our application, we can optimise and externalise it.

Digital signature on demand

Among the components ripe for provision by specialised third parties we find public key and digital signature infrastructure: more and more vital in applications, above all in the internet world, but complex and non-trivial to develop.

In this field, the tendency is to use existing libraries in the product or to install already-developed products – open source or commercial – from third parties. The problem is that developing, installing and maintaining this type of solution is not very accessible.

They’re only within reach of companies with huge budgets and, even then, those companies often lack the know-how or don’t budget for maintenance and continuous improvement of the infrastructure. The result is deployments whose budgets don’t reflect the total cost of ownership (TCO), that quickly become obsolete, that ship in a buggy state, or that never get deployed at all.

The field of public key infrastructure was the first of the two to make this move: today certain activities, such as issuing certificates, are limited to a small group of specialist providers, given that putting them into production requires a large amount of money and strength.

People are still waking up to the complexity of digital signature infrastructures and while very few consider developing their own certification agency, it is common to attempt to develop a digital signature authority (be it for creation, validation, signing or storage).

A good idea is to apply the same criterion as with certification agencies: delegate digital signature services to a third party and concentrate on developing our main business activity.

In fact, more and more public institutions and private companies in Spain are using third-party platforms that offer public key and digital signature infrastructure services. This is opening a new IT market and reinforcing the idea that such critical jobs should be centralised with specialised providers, who can exploit economies of scale to optimise cost and maximise the quality of the end product.

We believe this is an important idea: public key infrastructure and digital signatures should be something companies can integrate into their applications without the large sums of money and effort that have been required until now. We know how complicated these problems can be to solve because we have been there too, and that is why we decided to open up our digital signature services and offer them to third parties. On the internet, security should be a commodity, not a luxury.

By David García
Saved in: e-Signatures, Technology, Tractis | No comments » | 19 March 2008

Tractis technical decisions: Signature Creation Component

In the previous post, we started to describe the architecture and design decisions that we took in the development of Tractis. We talked about our experience in confronting the challenge of creating an infrastructure such as ours, in which diverse components (in function and technology) are united to form a single service.

As we explained, we could have opted to employ a single technology to solve all the problems. This way, we would have used a single language, a single paradigm and a single platform to meet all the challenges that we would encounter.

We discarded the idea because a single, optimal technology that would cover all the functional requirements didn’t exist.

So, we opted to use Ruby on Rails for the front-end web applications, given its enormous potential for developing this kind of application.

It wasn’t just because it’s fast, but also for its robustness and its excellent capacity to evolve and grow. But the front end, although the most visible part to users, is only one piece of our infrastructure.

We encountered a series of technical and architectural decisions outside of what Ruby on Rails could offer us, in the area of one of the most valuable functions we provide: the signing of contracts.

The signing process

People who use Tractis must be able to sign documents: not just any old way, but with the necessary guarantees. These statements may seem obvious and vague, but if we stop to analyse their technical consequences, we will see that they are not so trivial.

Our users are people who use different signature technologies such as tokens, identity cards or software certificates that are issued by different agencies, are in different countries and are running different browsers and operating systems.

In addition, the signatures must be reliable, so we should give maximum guarantees that the signer is entitled to use the signature, and validate it afterwards to avoid fraud.

Afterwards, we should archive it in a secure manner, applying electronic means of long-term preservation so that in the case of future disputes, we can provide the necessary evidence.

These requirements clearly delimit several complementary components of our infrastructure: a signature creation application, a validation application and a signed-document custody application.

Talking about all the components would be too much for a single post, so I will spread them over several posts to make each one less dense.

I will start by explaining our signature creation component to see with a bit more detail all that is involved in the act of ‘pushing the sign button’.

Signature creation component

The signing process is different from the commonly used process on the internet in that it inverts the producer/consumer paradigm. Normally browsers request content from the servers and once received they take charge of rendering it to the client. In the case of signing, the server requires an action from the client (that they do the signature) and once realised, it will validate it and store it.

The process works this way because the keys with which the client signs live in a cryptographic device, hardware or software, and they never leave the client machine; custody of the keys rests wholly with the user.

As we’ve stated previously, there is no single technological profile for the scenario in which the client performs the signature.

So we can have one client signing with Internet Explorer using certificates stored in their electronic ID card (in Spain this could be the DNIe electronic national ID card) while another signs from Firefox on Linux with a cryptographic token, and in both cases the experience and result should be equivalent.

The scenario gets more complicated as we try to cover more variations, given that the complexity is multiplied with each new actor that we introduce. So we couldn’t opt for a solution tied to a specific technology.

There are many good signing components tied to specific technologies or browsers, such as Microsoft’s ActiveX signing components for Internet Explorer or the Mozilla signing libraries usable via JavaScript.

The problem is that these components are tied to the technology on which they run: as soon as you try to introduce new browsers or operating systems into the mix, the development and maintenance costs become terrible.

And then Java arrived

One of the main benefits of Java – or at least one of the most lauded at its inception – is that you only have to develop and compile once, and the result can be distributed and run on any machine.

However, the fact that you need a Java virtual machine to run the code is rarely mentioned.

Even so, being able to develop once and run on any platform, above all in the internet world, is amazing. Unfortunately, client-side Java on the internet began strongly, when the Applet was a respected component, but has since fallen into an undeserved state of disuse.

In our judgement, one of the most common mistakes with Applets is using them for tasks to which they are not well suited, such as visual components – a task for which superior technologies such as Flash exist.

Nevertheless, Applets have massive potential in fields closer to business logic, away from the presentation of content, given that you can run Java applications inside the browser. If we can say anything about Java, it’s that it provides an incredibly well-designed set of cryptographic libraries that can be fully used inside Applets.

Java and the digital signature

One of the things that made us opt for Java as the technology for implementing the signature creation infrastructure was the excellent support that it has for cryptography, and in particular, for digital signatures.

It provides a pluggable system of cryptographic providers, so you don’t have to limit yourself to the provider Sun supplies: you can employ providers from different manufacturers, and even of different natures (software or hardware), all without altering the logic of your application.

This means that we can have, by means of a single code base, a component that performs cryptographic operations that support different types of cryptographic devices and on different platforms.

So the complexity of dealing with different devices is absorbed by the virtual machine, and we can manage them at a much higher level, without adapting our code to the ad-hoc functionality of each digital signature technology.

In this way we work with keys, certificates and signatures through defined interfaces and at a very high level, without descending into details of how or where certificates and keys are stored.
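As a minimal illustration of that provider abstraction, the sketch below signs and verifies a byte array with the standard `java.security.Signature` API. The generated RSA key pair is a stand-in for the example only; in the real component the private key would live in the user’s token, smart card or software store, and a different provider would serve it, but this signing code would stay the same.

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignSketch {
    // Sign `document` with the private key, then verify with the public one.
    // The same code runs unchanged whatever provider holds the keys.
    static boolean signAndVerify(KeyPair kp, byte[] document)
            throws GeneralSecurityException {
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(document);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(document);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        // Throwaway key pair purely for this sketch.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();
        System.out.println("valid: " + signAndVerify(kp, "contract text".getBytes("UTF-8")));
    }
}
```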

This abstraction also brings simplicity, robustness and a smaller applet, a big advantage for a component that the user must download to their browser before signing.

For anyone interested in diving into cryptography in Java, I recommend consulting one of the extensive documentation sets on the Java cryptographic architecture, such as those published by Sun or by dedicated groups such as Bouncy Castle.

Using Java 1.6

One of the most complicated decisions for applications that require Applets, and Java in general, is which minimum virtual machine version to support. Choose a version that is too new and your users have to update their virtual machine; choose to support obsolete virtual machines and you miss out on the functionality offered by later versions.
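A hedged sketch of what such a minimum-version check might look like at runtime (the parsing of the `java.specification.version` property below is our own assumption about its format, not anything Tractis-specific):

```java
public class MinJvmCheck {
    // Turn "java.specification.version" values ("1.6", "1.8", "9", "17"...)
    // into a comparable major number: "1.x" maps to x, later styles to the
    // leading number.
    static int majorVersion(String spec) {
        if (spec.startsWith("1.")) {
            return Integer.parseInt(spec.substring(2));
        }
        int dot = spec.indexOf('.');
        return Integer.parseInt(dot < 0 ? spec : spec.substring(0, dot));
    }

    public static void main(String[] args) {
        int running = majorVersion(System.getProperty("java.specification.version"));
        int required = 6; // hypothetical minimum, matching the 1.6 choice above
        System.out.println(running >= required
                ? "JVM is recent enough"
                : "Please update your Java virtual machine");
    }
}
```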

In our case we opted to use the latest version of Java Virtual Machine available right now – 1.6 (or 6.0). This version brings a large number of improvements in the area of cryptography above all in the field of support for different technologies for storing certificates.

Using this version we can support signing with certificates stored in certain types of repository that were previously unavailable in the virtual machine itself – only in additional libraries – such as those that Windows exposes via its Crypto API.

Supporting this type of technology is vital to our goal of bringing Tractis signing to the largest possible number of users, given that a large percentage of the certificates currently in circulation are stored this way.
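As an illustrative sketch, code can probe the installed providers to see whether a given keystore type – for example `Windows-MY`, the type the SunMSCAPI provider registers in Java 6 for the Windows certificate store – is available before trying to use it:

```java
import java.security.Provider;
import java.security.Security;

public class KeystoreSupport {
    // Returns true if the running JVM exposes the given KeyStore type,
    // e.g. "Windows-MY" (Crypto API certificates, via SunMSCAPI on Windows).
    static boolean supportsKeyStoreType(String type) {
        for (Provider p : Security.getProviders()) {
            for (Provider.Service s : p.getServices()) {
                if ("KeyStore".equals(s.getType())
                        && s.getAlgorithm().equalsIgnoreCase(type)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // "PKCS12" ships with every standard JVM; "Windows-MY" only on Windows.
        System.out.println("PKCS12:     " + supportsKeyStoreType("PKCS12"));
        System.out.println("Windows-MY: " + supportsKeyStoreType("Windows-MY"));
    }
}
```

A signing applet can use a check like this to fall back to another certificate store when the platform-specific one is absent.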

Conclusions

In this first post on the signature components we have briefly outlined some of the reasons that led us to adopt Java as the technology for the development of our signature creation component. We have also described why this component is local and how it is possible for us to support multiple browsers/operating systems.

In subsequent posts we’ll discuss similar design decisions in the validation and signature custody systems that we employ as part of our digital signature back end, closing the circle of components we use to secure the contracts in Tractis.

By David García
Saved in: Announcements | No comments » | 4 February 2008

Launch of Tractis Identity Services

This week we unveil the new “Tractis Identity Services” and make their API publicly available so that third-party people and sites can use them.

These services allow you to use digital certificates to identify other people online. To use this service, you just need a Tractis account and to follow a few simple steps described in the Tractis API documentation (only available in English at the moment).

[Diagram: dibujo-ingles-3.png]

Use cases

The “Tractis Identity Services” revolve around two use cases:

  • Synchronous Identification: Allows you to identify your website users by digital certificates. For example: logging in to your portal with a DNIe (Spanish Electronic ID card). This would be the process:
  1. When the user needs to authenticate themselves, they are redirected from your site to Tractis.
  2. Tractis asks the user to identify themselves using their digital certificate.
  3. Tractis verifies the status of the certificate against its validation authority.
  4. Tractis returns the result of the identification to you and redirects the user back to your site.
  • Asynchronous Identification: Allows you to link identities to email addresses. This means you can reliably get the identity of the owner of an email address and therefore increase confidence and avoid fraud. For example: marketplaces (eBay, Loquo, Infojobs…) can use asynchronous authentication to verify parties or to create VIP environments that offer a higher level of confidence. The process would be the following:
  1. Your web site asks Tractis to verify the identity of a user.
  2. The user receives an email from Tractis prompting them to identify themselves via their electronic certificate.
  3. Tractis checks the status of the certificate against its validation authority.
  4. Tractis communicates the result of the identification back to you.

In both cases, Tractis returns the result of the identification to your website and you decide what to do with the information.

It is important to underline that this process is configurable: your site can specify which attributes it does and doesn’t want to obtain from the identification process. For example, you might obtain the user’s name and ID card number, or only their nationality or age without any other data (useful for restricting access to some content without having to ask for credit card details). All of these details are extracted from the user’s digital certificate.
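As a rough sketch of the kind of attribute extraction involved (the distinguished name below is made up, and a real service would read it from the user’s X.509 certificate, e.g. via `X509Certificate.getSubjectX500Principal()`), a subject DN can be split into attributes with the JDK’s `LdapName`:

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class CertAttributes {
    // Extract a single attribute ("CN", "C", "SERIALNUMBER"...) from a
    // subject distinguished name, or return null if it is absent.
    static String attribute(String dn, String type) throws InvalidNameException {
        for (Rdn rdn : new LdapName(dn).getRdns()) {
            if (rdn.getType().equalsIgnoreCase(type)) {
                return rdn.getValue().toString();
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // Made-up DN in the style of a personal certificate (the comma in the
        // common name is escaped DN syntax).
        String dn = "CN=GARCIA LOPEZ\\, MARIA, SERIALNUMBER=12345678Z, C=ES";
        System.out.println("name:        " + attribute(dn, "CN"));
        System.out.println("id number:   " + attribute(dn, "SERIALNUMBER"));
        System.out.println("nationality: " + attribute(dn, "C"));
    }
}
```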

So that you can see it in action, without having to do any integration work, we’ve made an imaginary use case available in ACME, our demo site. Here you can find examples of both synchronous and asynchronous identification with a test certificate so that you can check it out, in case you don’t have a real one. You can also take a look at the Tractis API site.

By David García
Saved in: Announcements, Tractis | No comments » | 26 November 2007

A preview of the Tractis API

We’re working to launch the Tractis API in the middle of November. The API will allow you to connect directly with our back-end and use digital certificates to authenticate your users, store documents, digitally sign contracts and lots more besides.

The back-end Tractis services are organized into three groups according to purpose: identity services, document management services and digital signature services.

[Diagram: tractis-backend-small.png]

Externally accessible services

1. Identity services:

  • Identity Federation Server: Allows single sign-on to Tractis services. A typical use-case is where a customer of a Tractis organization uses multiple services but only has to identify themselves once (to the user organization).
  • Identity and Attribute Authority: Allows the management, certification and verification of attributes of individuals and organizations.
    • Management: Allows definition of access control on different attributes, according to their nature.
    • Certification: Allows connections to 3rd party attribute repositories and the presentation of challenges to the holders of an identity (example: authentication using personal certificates issued by a trusted authority).
    • Verification: Allows the lookup of information stated by the user (phone number, address, etc.).

2. Document management services:

  • Contract Management System: Allows complete contract lifecycle management.
  • Long-term Archive: Allows long-term storage of documents, guaranteeing their future reproducibility, integrity and authenticity.

3. Digital signature services:

  • Semantic Validation Authority: Verifies the validity of electronically signed documents from a legal and technical point of view. Supports advanced digital signatures based on the AdES format.

The “Evidence Manager” sits above all these services. It stores and preserves evidence for future investigative processes, and allows us to show third parties evidence regarding the operations performed by the different services.

Internal-only services

As you can see from the diagram, all these services use a series of internal components to guarantee properties (integrity, durability, etc.) of all operations performed by the platform.

  • Time Stamping Authority: Applies time stamps to electronic documents, permitting demonstration of a document’s content at a given moment in time.
  • Trusted Time Sources: Providers of date/time information synchronized through multiple channels (e.g. internet, phone…) with official sources of time such as the Real Observatorio de la Armada (Spanish Royal Navy Observatory), the official time in Spain.
  • Attribute Repository: Allows storage of user attributes and roles (responsibility, membership of professional organizations…) and makes them available to the applications that request them.

Finally, Tractis integrates with Certification Authorities around the world.

Why use this functionality?

We are going to open up the functionality progressively. Initially, we’ll open parts of the “Identity and Attribute Authority” and the “Contract Management System”, which will allow:

  1. Authenticating users via digital certificates (Spanish electronic ID card – DNIe – included), free of charge, from your website.
  2. Automating the bulk sending of personalized contracts to your customers, asking for their digital signature.

All these services will be offered remotely by Tractis. You don’t need to develop, install, configure or maintain an infrastructure valued in millions of euros that is within reach of only the biggest banks. You only need a Tractis account and to integrate with the API.

By David García
Saved in: Announcements, Identity, Programming, Tractis | No comments » | 31 October 2007