GFDL concerns
GFDL concerns — the idea behind the GNU Free Documentation License (GFDL, GNU FDL) is to make material public and freely available in a legalistic, enforceable way. The purpose of copyright, as currently manifested in the United States of America (USA), is to restrict usage and to give the owner maximal control in perpetuity. The GFDL is anti-copyright: an attempt to use the legal system and intellectual traditions to create a new class of material that is explicitly owned by us all in common, in perpetuity. Derived works of many sorts are supposed to be facilitated, with the proviso that they must all be made equally free and available for future derived uses.
But there seem to be serious problems with carrying out this general concept on wikis, and there is little sign that the problems are being addressed, either by those using the GFDL or by its creators. Are the creators learning from how things are working out in practice and trying to fix the problems?
Audit trail
The main problems with the GFDL stem from the fact that it does not simply create one giant pool of undifferentiated GFDL material. If it did, all GFDL users could simply keep all GFDL material properly segregated. Instead, the GFDL contemplates an audit trail covering the history, authors, dedication annotations, and so on of the material. It is a simple concept, and one more feasible with contemporary computers than ever before, but it is still almost impossible to implement fully in practice.
Currently, many websites take GFDL material without properly acknowledging it. This is being addressed, for example by Wikipedia sending legalistic notices to abusers. Many websites take GFDL material and make some effort to acknowledge it, though often in less detail than perhaps they should. But no websites seem to take such material and track it in as much detail as the concept actually requires. The nub of the problem is successive copying. Sites that take GFDL material merely point to the source, which is adequate as long as the source remains intact and available, and as long as the material is not passed on. But if a dozen sites pass the material along in series, each one merely providing some reference to its immediate predecessor, the whole concept falls apart as a practical matter: there is no longer a definite, dependable audit trail back to the original authors, history, and dedications. And if the material is substantially modified at each intervening website, the challenges multiply.
But even that case is relatively minor. It would be possible, both conceptually and technically, to keep passing on an ever more complex associated history package, even if that meant the auxiliary material in most cases grew orders of magnitude larger than the main content.
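To make that idea concrete, here is a minimal sketch in Python of what such an ever-growing history package might look like. The class and field names are hypothetical illustrations; the GFDL itself specifies no such format.

```python
# A minimal sketch of the "history package" idea: each time GFDL
# material is copied, the copier appends a provenance record rather
# than merely citing its immediate source. All names here are
# hypothetical, for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    site: str        # where this copy was published
    url: str         # location of the copy
    authors: list    # contributors at this step
    copied_at: str   # ISO timestamp of the copy
    modified: bool   # whether the text was changed at this step

@dataclass
class HistoryPackage:
    title: str
    chain: list = field(default_factory=list)  # grows with every copy

    def record_copy(self, site, url, authors, modified=False):
        """Append a new link to the audit trail instead of replacing it."""
        self.chain.append(ProvenanceRecord(
            site=site,
            url=url,
            authors=authors,
            copied_at=datetime.now(timezone.utc).isoformat(),
            modified=modified,
        ))

# After a dozen successive copies, the package still leads back to the
# original authors, but only if every intermediary cooperates.
```

The essential property is that each copier appends to the chain rather than replacing it, which is exactly the cooperation that breaks down across a dozen independent sites.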
The more serious practical and conceptual challenge is that there is not simply a single linear history of a single document at issue, nor even a simple forking tree. The true situation is an unlimited number of documents, mixing and matching, melding and dividing, in infinite combinations. There is simply no way to keep track of this, short of having a central database containing every single keystroke by every human, in combination with the then-current state of every document.
Wikis do not prohibit manual copying of GFDL material from one document to another within the same wiki, nor do they provide any mechanism for automatically maintaining a trail back to the source article.
So, while it is right and proper for GFDL recipients to acknowledge their (immediate) sources, as a gesture of thanks, it is quite unrealistic for authors to expect that their individual contributions will be tracked in perpetuity. (See the case of user Panic2k4 on Wikibooks.)
The mixing of GFDL material with material under other copyleft licenses, such as various versions of the Creative Commons (CC) licenses, multiplies the problems.
Retraction / deletion / rewriting history
Another emerging problem, particularly at Wikipedia, with no obvious solution, is the concept that a GNU Free Documentation License (GFDL) grant can never be taken back. The whole idea is that contributors are guaranteed the public availability of their contributions, in perpetuity.
But once the level of activity in a wiki gets large enough, with contributions from millions of sources, a substantial amount of material gets published as GFDL in error. It might be obscene, or libelous, or simply taken inappropriately from a copyrighted source. Making such judgements is a tremendous burden on the infrastructure of the wiki. At first, there may be a tendency to ignore such problems. Later, there may be a tendency to over-react and simply delete material that might be problematic. But if material merely seems unacceptable, and is not truly so, then withdrawing it violates the GFDL. How can any person or organization be sure? And in some cases, simply editing the current version is not enough; the organization may deem it advisable or necessary to delete all previous versions of the material. The organization may even feel so threatened by the material (and the process) as to feel a need to delete every record relating to the matter, every indication that the material ever existed at all.
At the same time, the GFDL would seem to guarantee that once published, even for an instant, others can make copies of the material that later is treated as problematic by the source, and deleted by the source. But the copier is relying on the GFDL classification of the material, and continues to publish the material, pass it on to others, and point to the original source – which now wishes to be completely disassociated from the material. The source may wish to require all recipients to permanently and completely expunge copied material, but how can such a concept actually be implemented, and should it be?
Guaranteeing public access in perpetuity
Another problem is the permanence of retention of, and public access to, the material. The promise that entices contributors to participate in GNU Free Documentation License (GFDL) projects is that their work will become part of the common heritage of mankind. In the case of a big project like Wikipedia, with many mirrors and an openly published unified download, with and without history details, this hope appears to have a firm basis. But at the other extreme are small wikis run by a single individual with no technical expertise, who simply relies on the website host to make internal backups. Such material could easily be lost. The GFDL merely guarantees that the public may make backup copies; it does not guarantee that anyone actually does, or that such copies will be made public if the original source disappears.
A related problem is that although the GFDL incorporates the concept of public, practical access to transparent computer files of the material, there does not seem to be a specific requirement to publish a unified database file of a site's entire GFDL material, and few sites other than Wikipedia offer such a public download. So although a GFDL web page may in theory be copiable, few would want the hassle of spidering an entire website to collect all of its GFDL material, and most websites would take technical measures to slow or thwart such spidering. In cases where Creative Commons (CC), GFDL, and other material is intermixed on a website, it might be impossible to automatically gather just the GFDL material from outside.
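A rough sketch of why per-page gathering is fragile: even a crawler that guesses a page's license from visible markers, as below, is defeated once licenses are intermixed under a single site footer. The marker strings and helper function here are hypothetical, not part of any standard.

```python
# A minimal sketch of externally gathering only the GFDL pages of a
# site, assuming each page links to the GFDL text in some recognizable
# way; that assumption often fails in practice, which is the point
# made above. Hypothetical example, not a robust crawler.
import urllib.request

GFDL_MARKERS = (
    "gnu.org/copyleft/fdl",
    "GNU Free Documentation License",
)

def looks_gfdl(url: str) -> bool:
    """Fetch a page and guess its license from visible markers."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return any(marker in html for marker in GFDL_MARKERS)

# Pages that mix GFDL, CC, and other material under one footer defeat
# this kind of per-page test entirely.
```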
This problem, at least, could be solved. There could be standardized ways to determine which third parties are archiving the material, and how often, in ways that guarantee continued public access regardless of what happens to the source. All wikis could be expected to divulge their internal power structure, that is, how many individuals have direct control of the infrastructure. All wiki farms could be expected to provide, and guarantee, independent external backup of GFDL material.
Future versions of the GFDL could, and should, require that each collection of over 100 GFDL electronic documents offer clean, unified public downloads as large database files. If all wikis did this, it would become much easier for the public to know whether any third parties were making and keeping copies of such material, and whether the files did indeed properly contain what they should.
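As a sketch of the kind of public verification such a download would enable, the following compares the page titles in a published dump against the titles a site claims to host. It assumes a MediaWiki-style XML dump; the file names are hypothetical.

```python
# A sketch of the check a unified public dump makes possible: does the
# dump actually contain every page the site claims to host? Assumes a
# MediaWiki-style XML dump; file names below are hypothetical.
import xml.etree.ElementTree as ET

def titles_in_dump(path: str) -> set:
    """Collect every <title> element from a MediaWiki-style XML dump."""
    titles = set()
    for _event, elem in ET.iterparse(path):
        if elem.tag.endswith("title"):
            titles.add(elem.text)
            elem.clear()  # keep memory use flat on large dumps
    return titles

def missing_pages(dump_path: str, claimed_titles: set) -> set:
    """Titles the site claims but the public dump does not contain."""
    return claimed_titles - titles_in_dump(dump_path)

# Example (hypothetical files):
# claimed = set(open("alltitles.txt").read().splitlines())
# gaps = missing_pages("sitedump.xml", claimed)
```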
Specialized repositories could be developed for archives of such files. The Internet Archive Wayback Machine at Archive.org could play a special role by entering into public agreements with various sites.
See also / external links
- Category: Wiki GNU Free Documentation License — for those wikis catalogued here on WikiIndex using the GFDL
- Template:GFDL — for tagging GFDL-licensed images uploaded here to WikiIndex
- GNU Free Documentation License Version 1.2, November 2002 — at GNU.org
- GNU Free Documentation License article at English Wikipedia
- Talk:GNU Free Documentation License — at English Wikipedia
- Why you shouldn't use the GNU FDL — by Nathanael Nerode, 2003 (CC-PD)
- GNU Simpler Free Documentation License — SFDLv1: Discussion Draft 1 of Version 1, 25 September 2006, at FSF.org
- GNU Simpler Free Documentation License — at English Wikipedia
- Copyleft — at English Wikipedia
- Anti-copyright — at English Wikipedia
- Creative Commons licenses — at English Wikipedia
- Collaborative writing — at English Wikipedia
- Wikipedia:Copyrights — at English Wikipedia
- Wikipedia:Mirrors and forks — at English Wikipedia