Archive for the ‘licensing’ Category

Peer Production License

Wednesday, May 28th, 2014

Recently, discussions about new licensing models for open cooperative production have come up (again). This discussion resurrects the “Peer Production License” proposed in 2010 by John Magyar and Dmytri Kleiner [1], which is also available on the p2pfoundation website [2], although it’s not clear if the latter is a modified version. The license is proposed by Michel Bauwens and Vasilis Kostakis, accompanied by a theoretical discussion [3] of why such a license would enhance the current state of the art in licensing. The proposal has already sparked criticism in the form of critical replies, which I will cite in the following where appropriate.

The theoretical argument (mostly based on Marxist theories I don’t have the patience to dig into) boils down to differentiating “good” from “bad” users of the licensed product. A “good” user is a “workerowned business” or “workerowned collective” [2], while a “bad” user seems to be a corporation. Note that the theoretical discussion seems to allow corporate users who contribute, “as IBM does with Linux. However, those who do not contribute should pay a license fee” [3] (p.358). I’ve not found a clause in the license that defines this contribution exception. Instead it makes clear that “you may not exercise any of the rights granted to You in Section 3 above in any manner that is primarily intended for or directed toward commercial advantage or private monetary compensation”. Finally it is made clear “for the avoidance of doubt” that “the Licensor reserves the exclusive right to collect such royalties for any exercise by You of the rights granted under this License” [2].

With the clauses cited above, the “new” license is very similar to a Creative Commons license with a non-commercial clause, as others have already noted [4] (p.363). Although the missing clause in the license granting a contribution exception for non-workerowned collectives or businesses is probably only an oversight — Rigi [5] (p.396) also understands the license this way — this is not the major shortcoming.

For me the main point is: who is going to be the institution that distinguishes “good” from “bad” users of the product, those who have to pay and those who don’t? The license mentions a “collecting society” for this purpose. Whatever this institution is going to be, it might be a “benevolent dictator” [6] at the start but will soon deteriorate into a real dictatorship. Why? As the software-focused “benevolent dictator for life” Wikipedia article [7] notes, the dictator has an incentive to stay benevolent due to the possibility of forking a project; this was first documented in ESR’s “Homesteading the Noosphere” [8]. Now since our dictator is the “Licensor [who] reserves the exclusive right to collect such royalties” [2], there are other, monetary, incentives for forking a project, which has to be prevented by other means covered in the license. Collection of royalties is incompatible with the right to fork a project. We have an “owner” who decides about “good” vs. “bad” and uses a license to stay in power. A recipe for disaster or, as a friend put it in a recent discussion, a “design for corruption” [9].

Another problem is the management of contributions. As Meretz has already pointed out, “only people can behave in a reciprocal way” [4] (p.363). Contributors are people. They may belong to an institution that is deemed “good” by the dictator at one point and may later move to an institution that is deemed “bad”. So a person may be excluded from using the product just because they belong to a “bad” institution. Take myself as an example: I have been running an open source business for ten years, “primarily intended for or directed toward commercial advantage or private monetary compensation” [2]. I guess I wouldn’t qualify for free use of a product under the “peer production license”. One of the reasons for the success of open source / free software like Linux was that employees could use it to solve their day-to-day problems. This use often resulted in contributions, but only after using it for some time.

Which leads to the next problem: the license tries to force “good” behaviour. But you must first prove to be “good” by contributing before you’re eligible to use the product. As noted by Rigi, the “GPL stipulated reciprocity does not fit into any of these forms [usually known to economists (my interpretation)]” [5] (p.398), because a contributor always gets more (e.g. a working software package) than his own contribution. This is exactly one reason, if not the main reason, why people are motivated to contribute. Openness creates more ethical behaviour than a license that tries to force ethics. Force or control will destroy that motivation, as exemplified in the Linus vs. Tanenbaum discussion where Tanenbaum stated:

If Linus wants to keep control of the official version, and a group of eager beavers want to go off in a different direction, the same problem arises. I don’t think the copyright issue is really the problem. The problem is co-ordinating things. Projects like GNU, MINIX, or LINUX only hold together if one person is in charge. During the 1970s, when structured programming was introduced, Harlan Mills pointed out that the programming team should be organized like a surgical team–one surgeon and his or her assistants, not like a hog butchering team–give everybody an axe and let them chop away.
Anyone who says you can have a lot of widely dispersed people hack away on a complicated piece of code and avoid total anarchy has never managed a software project. [10] (Post 1992-02-05 23:23:26 GMT)

To which Linus replied:

This is the second time I’ve seen this “accusation” from ast, who feels pretty good about commenting on a kernel he probably haven’t even seen. Or at least he hasn’t asked me, or even read alt.os.linux about this. Just so that nobody takes his guess for the full thruth, here’s my standing on “keeping control”, in 2 words (three?):
I won’t.
[10] (Post 1992-02-06 10:33:31 GMT)

and then goes on to explain how kernel maintenance works (at the time).

What becomes clear from this discussion is that the main focus of choosing a license is to attract contributors — preventing others from appropriating a version or influencing derived works is only secondary. Many successful open source projects use licenses that are more permissive than the GNU General Public License (GPL) Version 2 [11], and the new Version 3 of the GPL [12], which is more restrictive, sees less use. The programming language Python is a prominent example of a successful project using a more permissive license [13]. Armin Ronacher documents in a blog post [14] that there is a trend away from the GPL to less restrictive licenses. This is also confirmed statistically by other sources [15].

One reason for this trend is the growing mess of incompatible licenses. One of the ideas of open source / free software is that it should be possible to reuse existing components in order not to reinvent the wheel. This is increasingly difficult due to incompatible licenses; Ronacher touches on the tip of the iceberg in his essay [14]. License incompatibility has already been used to release software under an open source license while still not allowing Linux developers to incorporate the released software into Linux [14].

Given the reuse argument, adding another incompatible license to the mix (the proposed Peer Production License is incompatible with the GPL and probably other licenses) is simply insane. The new license isn’t even an open source license [16], much less does it fit the free software definition [17], due to the commercial restrictions; both definitions require that the software be free for any purpose.

When leaving the field of software and other artefacts protected by copyright, we enter the field of hardware licensing. Hardware, unlike software, is not protected by copyright (with the exception of some artefacts like printed circuit boards, where the printed circuit is directly protected by copyright). So it is possible, for private or research purposes, to reverse-engineer a mechanical part and print it on a 3D printer. If the part is not protected by a patent, it is even legal to publish the reverse-engineered design documents for others to replicate the design. This was shown in a study of UK law by Bradshaw et al. [18] but probably carries over to EU law. Note that the design documents are protected by copyright but the manufactured artefact is not. This has implications for the protection of open source hardware, because the finding can be turned around: a company may well produce an open source design without contributing anything back; even producing a modified or improved design that is not given back to the community would probably be possible.

Hardware could be protected with patents, but this is not a road the open source community wants to travel. The current state of the art in hardware licensing seeks to protect users of the design from contributors who later want to enforce patents against the design, by incorporating clauses where contributors license patents they hold for the project. This was pioneered by the TAPR open hardware license [19] and is also reflected in the CERN open hardware license [20].

To sum up: apart from the inconsistencies between the theoretical paper [3] and the actual license [2], I pointed out that such a license is a recipe for corruption when money is involved, due to the restrictions on forking a project. In addition, the license would hamper reuse of existing components because it adds to the “license compatibility clusterfuck” [14]. Moreover, it won’t protect what it set out to protect: hardware artefacts — with some exceptions — are not covered by copyright and therefore not by a license. We can only protect the design, but the production of artefacts from that design is not subject to copyright law.

Last but not least: thanks to Franz Nahrada for inviting me to the debate.

[1] Dmytri Kleiner, The Telekommunist Manifesto. Network Notebooks 03, Institute of Network Cultures, Amsterdam, 2010.
[2] Dmytri Kleiner, Peer Production License, 2010. Copy at (Not sure if this is the original license by Kleiner or a modification)
[3] Michel Bauwens and Vasilis Kostakis. From the communism of capital to capital for the commons: Towards an open co-operativism. tripleC: Communication, Capitalism & Critique, Journal for a Global Sustainable Information Society, 12(1):356-361, 2014.
[4] Stefan Meretz. Socialist licenses? A rejoinder to Michel Bauwens and Vasilis Kostakis. tripleC: Communication, Capitalism & Critique, Journal for a Global Sustainable Information Society, 12(1):362-365, 2014.
[5] Jakob Rigi. The coming revolution of peer production and revolutionary cooperatives. A response to Michel Bauwens, Vasilis Kostakis and Stefan Meretz. tripleC: Communication, Capitalism & Critique, Journal for a Global Sustainable Information Society, 12(1):390-404, 2014.
[6] Wikipedia, Benevolent dictatorship, accessed 2014-05-27.
[7] Wikipedia, Benevolent dictator for life, accessed 2014-05-27.
[8] Eric S. Raymond, Homesteading the Noosphere 1998-2000.
[9] Michael Franz Reinisch, private communication.
[10] Andy Tanenbaum, Linus Benedict Torvalds. LINUX is obsolete, discussion on Usenet news, reprinted under the title The Tanenbaum-Torvalds Debate in Open Sources: Voices from the Open Source Revolution, 1999. The discussion was continued under the subject “Unhappy campers”.
[11] GNU General Public License version 2. Software license, Free Software Foundation, 1991
[12] GNU General Public License version 3. Software license, Free Software Foundation, 2007
[13] Python Software Foundation. History and License 2001-2014
[14] Armin Ronacher, Licensing in a Post Copyright World, Blog entry, Jul 2013
[15] Matthew Aslett, On the continuing decline of the GPL. Blog entry, December 2011
[16] Bruce Perens, The Open Source Definition, Online document, Open Source Initiative, 1997
[17] Free Software Foundation, The free software definition. Essay, 2001-2010
[18] Simon Bradshaw, Adrian Bowyer, and Patrick Haufe, The intellectual property implications of low-cost 3D printing. SCRIPTed — A Journal of Law, Technology & Society 7(1):5-31, April 2010.
[19] John Ackermann, TAPR open hardware license version 1.0, Tucson Amateur Packet Radio, May 2007
[20] Javier Serrano, CERN open hardware license v1.1, Open Hardware Repository, September 2011

Ning eliminates free networks

Monday, April 19th, 2010

That Ning no longer supports free networks has been compared to blackmail by some.
I also think so. But blackmail requires two factors:

  • somebody who wants to blackmail others
  • a willing victim to go into the trap

You have the choice: only use a service that at least provides a way to get your data out. (To be fair, according to the blog entry cited above Ning will be offering this, but the details are still unclear.)
But: The data alone is nothing without the software. So you need a service where you can export the data and have open source software available to do something with the extracted data. But the first part is the crucial one: If you have only the data, software can be written…
I’ve written earlier in this blog (and talked @Manchester) about the problem of vendor lock-in in “cloud computing”, which is almost the same as “Web 2.0 services”, namely software as a service (SaaS). Ning falls into that category, as do other social network services like Facebook or Xing.
This boils down to what the open cloud initiative has defined as cloud computing openness: For open content you ideally want to go for a free cloud with open APIs, open formats, open source (software), and open data.
Note that Facebook is no alternative to Ning: people have been thrown off Facebook for retrieving their own data, as cited in these two entries on Henry Story’s blog.
But the choice has to be made by customers (or non-paying users) of these services: Don’t use something where you lock in your data. Or your data might be at risk, or locked in, or dead.
Doc Searls, co-author of The Cluetrain Manifesto and editor of Linux Journal, has written about this in a blog entry called Silos End: “These problems cannot be solved by the companies themselves. Companies make silos. It’s as simple as that. Left to their own devices, that’s what they do. Over and over and over again.”
Ideally there would be a standardized service, and hosting providers would agree to use the same software (maybe customized in appearance) to host services for users. A hosting standard for collaboration software, starting with the services Facebook, Xing, etc. are offering today. We want an interchange format that everybody can use, export, and import.
I think a standard for these types of services will leave us with a network of hosting providers. This — in comparison to the status quo today — will be a distributed system, maybe a peer-to-peer system, not some big players locking in users. A common standard will hopefully keep the players honest.
To get there: let’s try to evaluate replacement software for Ning and work on interchange formats. A suitable format for contact information is the Friend of a Friend (FOAF) format endorsed by the W3C; this is part of the semantic web effort.
One software package that comes close to this goal might be elgg — I’ve not tried it myself, but there is already a group of Elgg Service Providers, which is close to the goal of a support infrastructure built around an open source project.
I have two points of critique, one of them more a matter of personal taste, the other related to the license. The first is that the software is written in PHP. The license is the GNU General Public License, which offers no protection against a service provider making its own modifications to the hosted software and not releasing these modifications as open source software. Details are in my earlier article on the subject. So far, the elgg team seems to play the game very openly: the source code, including not-yet-released modifications, is freely accessible in a Subversion repository. Furthermore they offer nightly builds for download.
There are many other good points, too: it offers syndication with RSS and JSON, and has an API to interconnect with software running elsewhere — the basic ingredients for a distributed system. The API follows Representational State Transfer (REST), which happens to be the same mechanism on which the semantic web can be built.
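As a hedged sketch of what consuming such a JSON syndication feed could look like — the feed structure and field names below are invented for illustration, elgg’s actual API differs:

```python
import json

# Hypothetical client for a REST-style JSON syndication feed. In a real
# client, raw_json would come from an HTTP GET against the service's
# REST endpoint; here we use a canned sample document.
def parse_activity_feed(raw_json: str):
    """Extract (author, title) pairs from a JSON activity feed."""
    feed = json.loads(raw_json)
    return [(item["author"], item["title"]) for item in feed["items"]]

sample = '{"items": [{"author": "alice", "title": "First post"}]}'
print(parse_activity_feed(sample))  # -> [('alice', 'First post')]
```

The point is not this particular format but that an open, documented feed lets any other software interoperate — exactly what a distributed system needs.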
So let’s take some steps in the direction of a system built on standardized components where no vendor can lock us in.
When we get there, we’ve left Web 2.0 behind. The future is a distributed system; let’s call it Web 3.0.

Media Ecologies Conference

Tuesday, November 3rd, 2009

I’m currently at the Media Ecologies conference in Manchester, UK. I just gave a talk about the tools and interfaces we need for collaboration on the web. This also rehashes some of the ideas in my blog entry on cloud computing and the problems with the (lack of) openness of cloud applications. The slides of my talk can be downloaded from my website.

Did Ronja Fail?

Tuesday, October 27th, 2009

Ronja, the optical data link device, is often cited as a failed open source hardware project — the most recent mention I read is in Lawrence Kincheloe’s excellent essay Musings Upon the Nature of Open Source Hardware as a Business at the end of his project visit summary at Factor e Farm.
Ronja did fail (in the sense that it isn’t very widespread today, not in the sense of being a cool open source project). One of the research studies I know of is the presentation “Ronja — Darknet of Lights” by Johan Söderberg at the 4th Oekonux conference, for which audio is available. The study is very interesting, although I don’t agree with the conclusions. So why did Ronja “fail”?
Ronja’s main application was cheap internet access. At the time of its design in 2001 wireless LAN (Wifi) wasn’t yet available cheaply. And in the Czech Republic DSL wasn’t available at the time.
Now consider the technical characteristics of Ronja:

  • Up to 10MBit/s
  • Up to 1.4 km range
  • Light: Doesn’t work in fog, or other bad weather (snow)
  • Light: Hard to get the beam to the destination (direction)
  • Light: Interference with daylight
  • For full-duplex communication we need two (receiver + transmitter) devices
  • sold for around $700 at the time (the LED alone cost $120; you can get these for $0.75 now)
  • needed “a hell of a lot of time to build one” according to Söderberg

And compare these with WLAN:

  • Up to 54MBit/s
  • With good antennas several km range (I’ve built a link with 5.5km)
  • Antennas are cheap and can even be built at home, e.g., a Cantenna — you can build a cantenna in an evening
  • Works in fog and bad weather
  • we need only one antenna at sender and one at receiver
  • WLAN is very cheap nowadays; it became available (with new frequencies) in the Czech Republic in 2005

So I think that Ronja “failed” because it was replaced by something better and cheaper that was readily available. It isn’t an example of a failed open source business model for hardware and shouldn’t be used as one. This doesn’t mean that we already know what a business model for open source hardware should look like, though.
The idea behind Ronja — “User Controlled Technology”, according to the Wikipedia article on Ronja — is (mostly) achieved with WLAN technology today: we can use cheap devices and modify them (using open source firmware and homegrown antennas) to suit our needs. And there are large wireless communities now, like Funkfeuer in Vienna, that run their own Internet communication.

(cc)alpsSalon: open everything

Friday, September 11th, 2009

Update 2009-09-14: Marcin from open source ecology has put the video we showed at the event online (video in English only).

This evening I’ll be on the panel at the Creative Commons (cc)alps Salon, an event in the context of the Paraflows Festival, on the topic Open Everything. I’ll talk about the ongoing application of the open source principles we know from software development to other areas (such as open hardware design). The event will probably be held in English, since Michel Bauwens, founder of the P2P Foundation, will be there.

A quote from the announcement (translated from German):

Only a few people are able to give a satisfying answer to the question “What is open everything, actually?”; the overview presented in the mind map forms the basis for the actual explanation. The (cc)alpsSalon has therefore invited MICHEL BAUWENS to answer this question, to give an overview of past and present developments connected with this idea, and to point out the potential it holds for everyone who offers and uses open materials, sources, designs, simply everything.
One of the most impressive embodiments of this ethos is open source ecology (OSE), a project that aims to create an open source community founded on sustainability, ecological responsibility and the freedom of the individual. FRANZ NAHRADA will present this innovative idea in more detail, showing how the concept of open everything can be realized in communities willing to live openness every day.
Society is shaped by many factors; culture and technology are two of the most decisive. The technical side of open everything forms the basis for a culture of “makers”, marking a shift from mass production towards self-made or self-designed products. This do-it-yourself (DIY) culture depends on the improvements that arise from sharing experiences and ideas. RALF SCHLATTERBECK will show us how this community works and how it benefits from the ethos of open everything.
When: 2009-09-11, 19:30. Where: Quartier für digitale Kultur, Quartier 21, Museumsquartier, Museumsplatz 1, 1070 Wien

Cloud computing, Vendor Lock-In and the Future

Tuesday, August 4th, 2009

Cloud Computing is becoming increasingly popular — and it is a danger to your freedom. But we can do something about it.
First, when the term cloud computing was introduced, it meant a set of low-level services like virtual machines, databases and file storage. Examples of these are Amazon Elastic Compute Cloud and related services. Since these services are quite low-level, they can be replicated by others; an example is the Eucalyptus project.
This means if you aren’t satisfied with the service one cloud computing provider offers, you either can change the provider or — e.g., using Eucalyptus — roll your own.
But increasingly, cloud computing is a relaunch of the old software-as-a-service paradigm under a new name. This means that applications like text processing, spreadsheets, wikis, blogs, voice and video over IP, and collaboration software in general are made available as so-called “Web 2.0” applications — now called “cloud applications” — on the web.
When using these services, there is a severe risk of vendor lock-in: since the applications may not be available elsewhere, you cannot easily switch providers. Worse: from some of the Web 2.0 services like social networks (e.g., Xing, LinkedIn, Facebook) you can’t retrieve your own data. Xing, for example, has a “mobile export” for data, but it works only for paying customers and exports only address data.
And people have started to realize — e.g., in this Facebook group — that multiple incompatible applications, especially in the social network sector, put a large burden on customers who have to update personal profiles on multiple sites.
But although it has been noted by the Free Software and Open Source community (e.g., in an interview with Richard Stallman and by Eric S. Raymond in his blog), it has not been widely recognized that cloud computing or software as a service — in particular in the form called “Web 2.0” — creates a vendor lock-in worse than that of proprietary software.
For your social networks this may mean that when you retrieve your data (remember, you helped them build that data!), the social network may throw you out as it happened in that case mentioned by Henry Story and later updated here.
The solution to this problem? Don’t get trapped in a data silo. This may still mean that there can be software as a service offerings. But the software needs to be free (as in free speech). So we can still switch to another provider or decide to host our own service.
But companies won’t do it for us. As Doc Searls notes in Silos End: “These problems cannot be solved by the companies themselves. Companies make silos. It’s as simple as that. Left to their own devices, that’s what they do. Over and over and over again.”
So this can only change if customers make and demand the change. A good rule of thumb for software as a service is on the page of the Open Cloud Initiative in the article The four degrees of cloud computing openness. While being a customer of a closed/proprietary cloud with “no access” is clearly a bad idea, open APIs and formats alone don’t work too well either: you still don’t have the software to work with your data. So the only valid options that remain are Open APIs, Open Formats and Open Source, and in some cases Open Data.
Still most web applications — like most social network software — are of the completely closed type. There are no open formats and no open APIs. So check your dependencies: What web-applications are you depending on and what is their degree of cloud computing openness?
A word on the license to guarantee openness in cloud-computing. As mentioned in the above-cited interview with Richard Stallman, the GNU General Public License is not enough to keep software in a cloud open. The cloud provider could take the software, make own modifications (which you will depend upon) and not release the modified software to you as a customer. Again you have a vendor lock-in. To prevent this, the GNU Affero General Public License has been designed that prevents closed-source modifications to hosted applications.
Finally, all sorts of social software — not just social network software but everything that creates more value for more people, usually by linking information — should follow a distributed peer-to-peer approach. We don’t want this data to be a siloed application hosted by a single company. And if there are multiple companies hosting the data, we already see the problem we have with multiple social network providers.
So we need standards and distributed protocols. And the implementation should follow a peer-to-peer approach — as seen in filesharing applications today — to make it resilient to failure and/or take-down orders by hostile bodies (like, e.g., some governments). Let’s call this “Web 3.0”.
An example of such social software is of course the social network sector. We already have a distributed protocol for social networking based on the Friend of a Friend (FOAF) Semantic Web ontology. With this approach everyone can publish their social networking data and still be in control of who can see what. And the data is under user control, so it’s possible to remove something.
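To make the FOAF idea concrete, here is a minimal sketch of a FOAF profile in Turtle syntax, generated as a plain string. The terms foaf:Person, foaf:name, foaf:homepage and foaf:knows are real terms from the FOAF vocabulary; the person and URIs below are made up for illustration, and real profiles typically carry more properties:

```python
# Build a minimal FOAF profile in Turtle syntax. The foaf: prefix is the
# real FOAF namespace; the name and URIs passed in are fictitious.
def foaf_profile(name: str, homepage: str, friend_uri: str) -> str:
    """Return a minimal FOAF document in Turtle syntax."""
    return "\n".join([
        "@prefix foaf: <http://xmlns.com/foaf/0.1/> .",
        "",
        "<#me> a foaf:Person ;",
        '    foaf:name "%s" ;' % name,
        "    foaf:homepage <%s> ;" % homepage,
        "    foaf:knows <%s> ." % friend_uri,
    ])

print(foaf_profile("Alice Example", "https://alice.example.org/",
                   "https://bob.example.org/foaf#me"))
```

Publishing such a file at a stable URI, with friends’ profiles linked via foaf:knows, is what makes the network distributed: each person hosts their own data instead of handing it to a silo.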
Another example of social software is probably money (in the sense of micro- or macro-payments on the net). Thomas Greco, in his book The End of Money and the Future of Civilization, calls for the separation of money and the state. A future implementation of money may well be based on a peer-to-peer social software implementation.
Such social software needs security solutions: we want to model trust relationships. Parts of the puzzle are probably OpenID and a newly proposed scheme by Henry Story called FOAF+SSL, mainly aimed at social networking 3.0 but probably very useful for other social software solutions.
So let’s work on solutions for the future.

Why I Don’t Use Skype

Thursday, May 28th, 2009

Since I keep being asked for my Skype ID, here are my reasons why I don’t use Skype:

The company behind Skype used to make peer-to-peer file-sharing software (under the name “KaZaA”); file-sharing programs are used to exchange music and other electronic content. This software demonstrably contained so-called “spyware” (see also the various tips on how to supposedly disable it). By spyware we mean programs that spy on a computer without its owner noticing and send the collected data to the author of the spyware via the Internet. The collected data ranges from statistics about visited websites to passwords. What exactly the KaZaA spyware collected is beyond my knowledge. I do not entrust my phone calls to such people.

There are independent analyses of Skype from 2005 and 2006, according to which no evidence of spyware was found in the analyzed Skype version. That may have changed in the meantime, and these analyses say nothing about the security of Skype:

Skype (and KaZaA before it) contain mechanisms to install new software versions automatically (partly without the user’s knowledge or even consent). Such a new version could contain spyware, or simply a software bug that was not there before. This puts you at the mercy of the software’s makers, since what goes into new versions is under Skype’s control. One could also say: once you install Skype, your computer no longer belongs to you.

It is also repeatedly claimed that communication with Skype is encrypted. That may well be true. The reason, however, is probably not the user’s privacy but the intent to prevent others from writing software that speaks the Skype protocol. After all, what good is encryption if I don’t know who holds the key? The Skype user certainly doesn’t hold it.

On the question of eavesdropping, Kurt Sauer, head of Skype’s security department, gave an evasive answer when asked by ZDNet whether Skype could listen in on calls: “We provide a secure means of communication. I will not tell you whether we can listen in or not.” (See the article on this in the German Wikipedia, or the ZDNet interview directly.)

On top of that, Skype does not adhere to any established standards for voice communication over Internet protocols; indeed, how exactly Skype works is not disclosed, so no other company can currently build programs that interoperate with Skype software. Such “closed source” programs foster monopolies and, much like monopolies in the food sector such as Monsanto’s genetically modified maize, should be regarded with heightened vigilance. The established standards for voice communication are in no way inferior to Skype in terms of voice quality.

Skype has, from its peer-to-peer past, mechanisms for “tunneling” through firewalls. These techniques, also known as “firewall piercing”, are dangerous for a company’s security, or as a colleague humorously put it: “firewall piercings can become inflamed and fester”.

There are established standards for voice communication, such as SIP (Session Initiation Protocol) for call setup. There are open source implementations of “softphones”, programs which, much like Skype, let you make phone calls via a computer. One example is Qutecom (formerly “Wengo Phone”); a Google search for “softphone” should turn up several others. There are of course also commercial vendors of such programs (partly closed source); the crucial point is a common protocol everyone can join. By now there are also “hard” phones, i.e. a device that looks like a telephone but has an Ethernet jack on the back and speaks SIP. The Budgetone from Grandstream is very inexpensive; another vendor is Snom, and Cisco has bought some smaller vendors such as Sipura.

I have little experience myself with such softphones on Windows or Mac platforms. I am grateful for reports of experiences.

Then there are providers offering switching services for such softphones. One example is sipgate; others can be found at . You sign up there, can talk to other softphones over the Internet for free, and with some providers even get a free telephone number through which you can be called from the fixed network. The "business model" of these providers is calls from the Internet into the fixed network. These cost something, but are still considerably cheaper than, e.g., Telekom in Germany or Austria.

Another SIP service is run by the team behind the open source softphone of the same name, Ekiga; I can be reached there, e.g., as .

In addition, a public directory service, ENUM, is being built, in which you can reuse your own telephone number. In the future this will make it possible to simply enter a telephone number and reach the desired party over the Internet.
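The ENUM mapping itself is simple: the phone number is turned into a DNS domain under e164.arpa, where NAPTR records then point to, e.g., a SIP address. A minimal sketch of the number-to-domain step (the function name `enum_domain` is my own, not part of any ENUM software):

```python
def enum_domain(e164_number: str) -> str:
    """Map an E.164 phone number to its ENUM DNS domain (RFC 6116).

    Strip the leading '+', reverse the digits, join them with dots,
    and append the 'e164.arpa' zone.
    """
    digits = [c for c in e164_number.lstrip("+") if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

# The (fictitious) UK number +44 20 7946 0148 becomes:
print(enum_domain("+442079460148"))
# 8.4.1.0.6.4.9.7.0.2.4.4.e164.arpa
```

A resolver then looks up NAPTR records for that domain to find how the subscriber can be reached over the Internet.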

Meanwhile there is also an open source telephone exchange (PBX), Asterisk. Asterisk can be connected to the fixed network (ISDN, but also an analog line) as well as participate in Internet telephony using various standards (SIP, IAX, H.323). The telephony software runs on a perfectly ordinary off-the-shelf PC; models with low power consumption are recommended, since a PBX should run day and night. Asterisk already "speaks" ENUM today. Moreover, ordinary "analog" telephone sets can be connected via plug-in cards. You can then operate several SIP providers simultaneously and a fixed-network line on the same PBX, and make calls with an ordinary analog telephone, a comfort ISDN telephone, a hard phone (e.g. Snom), or simply with a softphone. You can have the PBX check whether a given party is reachable over the Internet or only via the fixed network. The caller need not even notice whether the call goes over the fixed network or the Internet.
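The Internet-first, fixed-network-fallback routing described above can be sketched in Asterisk's dialplan. A minimal, hypothetical extensions.conf fragment (the provider name "sipprovider" and the DAHDI group number are assumptions, not from any real setup):

```ini
; extensions.conf -- hypothetical fragment
[outgoing]
; First try to reach the dialed number via a SIP provider,
; giving up after 20 seconds ...
exten => _0X.,1,Dial(SIP/${EXTEN}@sipprovider,20)
; ... and fall back to the fixed network (ISDN card, DAHDI group 1)
; if the Internet route fails.
exten => _0X.,n,Dial(DAHDI/g1/${EXTEN})
```

The analog phone plugged into the PBX dials one number; which path the call takes is decided by the dialplan, invisibly to the caller.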

The ingenious thing about Asterisk (and the recipe for success of many other open source projects) is its modular design: for each device or protocol to be connected, you can write a "channel driver", after which Asterisk can communicate with the new device. Thus a specialist for a particular device or protocol can contribute a new device driver.

Asterisk PBXes can be networked with one another, including over an encrypted connection across the Internet, a so-called "Virtual Private Network" (VPN). Then you can make calls without third parties being able to eavesdrop on the connection; such an installation, however, requires agreements between the operators of the PBXes being networked.
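Such an inter-site trunk could look roughly like this; a hypothetical iax.conf fragment for site A's PBX (all names and the secret are placeholders to be agreed upon with site B's operator, and the encrypted tunnel itself, e.g. a VPN between the two sites, is configured separately):

```ini
; iax.conf on site A's Asterisk -- hypothetical fragment
[site-b]
type=friend                      ; both place and accept calls
host=pbx.site-b.example.org      ; site B's PBX, reachable via the VPN
username=site-a
secret=agreed-secret             ; shared secret agreed with site B
context=from-site-b              ; dialplan context for incoming calls
```

Calls to site B's extensions are then routed in the dialplan via Dial(IAX2/site-b/...), and the VPN underneath keeps the voice traffic away from eavesdroppers.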

Newer techniques allow the use of existing SIP infrastructure while nevertheless making encrypted calls without prior agreement. The key is negotiated directly between the two participants. Philip Zimmermann, the author of PGP, has proposed the ZRTP standard for this, which has meanwhile been submitted for standardization to the Internet Engineering Task Force (the body that makes Internet standards).

I myself have been using Asterisk instead of my old ISDN PBX for several years.

Open Source Document Licensing

Thursday, March 19th, 2009

I’m currently preparing a technical college lecture. The slides for the lecture should become open source. To reduce my overhead I want to reuse existing material (mainly pictures) from Wikipedia.

Open source licensing should really make it easy to reuse material in other open source projects. As far as I can tell, the current mess of different documentation licenses does not achieve that goal.

Sad fact: understanding what is possible under the current licensing is nearly as time-consuming as re-creating the material from scratch. So I’ve chosen to document what I’ve learned here, so that others may have a shorter learning curve and can contribute their experience.

In addition I hope that people familiar with the licensing jungle will comment on my views here.

Typically Wikipedia pictures come in three license variants; see the Wikipedia Copyrights page (the German version, Wikipedia Lizenzbestimmungen, has specific sections on picture use):

Some pictures are dual-licensed under GFDL and CC-BY-SA.

Since the GFDL typically is used with a version-upgrade clause, e.g., "Version 1.2 or any later version published by the Free Software Foundation", upgrade to a later version of the license by the user is possible. This is typically not the case with CC-BY-SA.

I’ve decided that CC-BY-SA version 3.0 best fits my license requirements. The GFDL with its front-cover, back-cover and invariant sections is too complicated and CC-BY-SA is much clearer concerning reuse and remix of the material.

One problem I’m having is that when "performing" my slides (that’s the term CC-BY-SA uses for, e.g., using the slides in a presentation) I want to use either my company logo or the logo of the teaching institution I’m working for. So I’ve come up with the following addition to the pointer to the licensing terms:

When performing this work (e.g. teaching using these slides) you may use your company and/or teaching institution logo in the header of each slide without putting the logo under the license above. When distributing derived works, make sure you distribute the document without the company or teaching institution logo.

So I’m specifically allowing the use of a logo in the header of each slide when performing. I hope this is compatible with the CC licensing terms.

The next problem I’m facing is reuse of pictures. Pictures licensed under a CC-BY-SA license (also earlier than 2.5) shouldn’t pose a problem, because CC-BY-SA explicitly distinguishes derivative work and collective work. Collective work is defined as (cited from version 2.5 of CC-BY-SA as that is the relevant version for most pictures on Wikipedia):

"Collective Work" means a work, such as a periodical issue, anthology or encyclopedia, in which the Work in its entirety in unmodified form, along with a number of other contributions, constituting separate and independent works in themselves, are assembled into a collective whole. A work that constitutes a Collective Work will not be considered a Derivative Work (as defined below) for the purposes of this License.

So I guess my use of the unmodified pictures in slides is collective work not derivative work. That means I can use CC-BY-SA pictures from wikipedia in a CC-BY-SA document that uses these pictures similar to the usage of pictures in Wikipedia articles, even if the version of the CC-BY-SA license is not the same.

The question of whether I can use pictures licensed under the GFDL in my slides licensed under CC-BY-SA is still not fully clear to me. Since the pictures typically contain the license-version upgrade clause mentioned above, I could use version 1.3 of the GFDL, which includes permission to relicense the work under the CC-BY-SA license under specific circumstances — but my interpretation of that clause allows this only for Wikipedia, not for me as a user of the content on Wikipedia.

Putting my work under a dual-license (CC-BY-SA + GFDL) is also not a solution because this effectively constitutes relicensing of the used content.

So the question remains if I can use GFDL pictures in CC-BY-SA slides and if this is permitted by the GFDL. The GFDL has one paragraph (7) on "aggregation with independent works":

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation’s users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

So, hmm, are my slides a "compilation with other separate and independent documents or works" — probably yes. Are they "in or on a volume of a storage or distribution medium"? Hard to say. My "copyright resulting from the compilation [provided it is a compilation in the sense of the GFDL] is not used to limit the legal rights of the compilation’s users beyond what the individual works permit". So I guess I can use these pictures without the GFDL applying to my document (I want to use the CC-BY-SA).

That’s my due diligence investigation before using this material.

But I’m not a lawyer.

The Four Fundamental Freedoms of the GNU General Public License

Tuesday, December 9th, 2008

Because I am asked again and again what the main features of open source licenses are, in particular of the GNU General Public License (GPL) [German translation here]: a quite good (German) explanation appears in a court filing by a law firm, which I reproduce here:

[The GPL is a licensing system] whose main purpose is to enable the widest possible distribution of a particular piece of software and to expressly permit modifications to this software. This stems from the intention that software improves continuously by allowing anyone who identifies problems in it to optimize it.

The GPL is based on the four so-called "fundamental freedoms" that form its basis. These fundamental freedoms are part of the license.

The first fundamental freedom grants the user the right to use the program for any purpose without any restriction. Commercial use is expressly included in this right of use.

The second fundamental freedom in turn grants the user the right, subject to certain restrictions concerning the so-called source code [...], to distribute the program, i.e. to copy it and put it into circulation free of charge or for a fee. Only the charging of license fees for the use of the program is not permitted.

The third fundamental freedom states that the program may be studied and adapted to one's own needs.

Finally, the fourth fundamental freedom states that modified versions of the program may also be put into circulation under the conditions of the aforementioned rules.

The GPL is thus a safeguard for the user: nobody, not even the original producer of the software, can restrict the rights of use. No "vendor lock-in", no hidden costs in the form of license fees. Should the original vendor become too expensive for maintenance, the vendor can be changed at any time.

The software may also be passed on (even for commercial purposes); only the rights one has in the software oneself may not be withheld from the recipient.