--- /dev/null
+ GNU GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+ The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users. We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors. You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+ To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights. Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received. You must make sure that they, too, receive
+or can get the source code. And you must show them these terms so they
+know their rights.
+
+ Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+ For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software. For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+ Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so. This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software. The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable. Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products. If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+ Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary. To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ TERMS AND CONDITIONS
+
+ 0. Definitions.
+
+ "This License" refers to version 3 of the GNU General Public License.
+
+ "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+ "The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+
+ To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy. The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+ A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+ To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+ To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+ An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+ 1. Source Code.
+
+ The "source code" for a work means the preferred form of the work
+for making modifications to it. "Object code" means any non-source
+form of a work.
+
+ A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+ The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+ The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+ The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+ The Corresponding Source for a work in source code form is that
+same work.
+
+ 2. Basic Permissions.
+
+ All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+ You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force. You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright. Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+ Conveying under any other circumstances is permitted solely under
+the conditions stated below. Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+ No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+ When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+ 4. Conveying Verbatim Copies.
+
+ You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+ You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+ 5. Conveying Modified Source Versions.
+
+ You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+ a) The work must carry prominent notices stating that you modified
+ it, and giving a relevant date.
+
+ b) The work must carry prominent notices stating that it is
+ released under this License and any conditions added under section
+ 7. This requirement modifies the requirement in section 4 to
+ "keep intact all notices".
+
+ c) You must license the entire work, as a whole, under this
+ License to anyone who comes into possession of a copy. This
+ License will therefore apply, along with any applicable section 7
+ additional terms, to the whole of the work, and all its parts,
+ regardless of how they are packaged. This License gives no
+ permission to license the work in any other way, but it does not
+ invalidate such permission if you have separately received it.
+
+ d) If the work has interactive user interfaces, each must display
+ Appropriate Legal Notices; however, if the Program has interactive
+ interfaces that do not display Appropriate Legal Notices, your
+ work need not make them do so.
+
+ A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+ 6. Conveying Non-Source Forms.
+
+ You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+ a) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by the
+ Corresponding Source fixed on a durable physical medium
+ customarily used for software interchange.
+
+ b) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by a
+ written offer, valid for at least three years and valid for as
+ long as you offer spare parts or customer support for that product
+ model, to give anyone who possesses the object code either (1) a
+ copy of the Corresponding Source for all the software in the
+ product that is covered by this License, on a durable physical
+ medium customarily used for software interchange, for a price no
+ more than your reasonable cost of physically performing this
+ conveying of source, or (2) access to copy the
+ Corresponding Source from a network server at no charge.
+
+ c) Convey individual copies of the object code with a copy of the
+ written offer to provide the Corresponding Source. This
+ alternative is allowed only occasionally and noncommercially, and
+ only if you received the object code with such an offer, in accord
+ with subsection 6b.
+
+ d) Convey the object code by offering access from a designated
+ place (gratis or for a charge), and offer equivalent access to the
+ Corresponding Source in the same way through the same place at no
+ further charge. You need not require recipients to copy the
+ Corresponding Source along with the object code. If the place to
+ copy the object code is a network server, the Corresponding Source
+ may be on a different server (operated by you or a third party)
+ that supports equivalent copying facilities, provided you maintain
+ clear directions next to the object code saying where to find the
+ Corresponding Source. Regardless of what server hosts the
+ Corresponding Source, you remain obligated to ensure that it is
+ available for as long as needed to satisfy these requirements.
+
+ e) Convey the object code using peer-to-peer transmission, provided
+ you inform other peers where the object code and Corresponding
+ Source of the work are being offered to the general public at no
+ charge under subsection 6d.
+
+ A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+ A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling. In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage. For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product. A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+ "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source. The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+ If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information. But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+ The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed. Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+ Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+ 7. Additional Terms.
+
+ "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law. If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+ When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it. (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.) You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+ Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+ a) Disclaiming warranty or limiting liability differently from the
+ terms of sections 15 and 16 of this License; or
+
+ b) Requiring preservation of specified reasonable legal notices or
+ author attributions in that material or in the Appropriate Legal
+ Notices displayed by works containing it; or
+
+ c) Prohibiting misrepresentation of the origin of that material, or
+ requiring that modified versions of such material be marked in
+ reasonable ways as different from the original version; or
+
+ d) Limiting the use for publicity purposes of names of licensors or
+ authors of the material; or
+
+ e) Declining to grant rights under trademark law for use of some
+ trade names, trademarks, or service marks; or
+
+ f) Requiring indemnification of licensors and authors of that
+ material by anyone who conveys the material (or modified versions of
+ it) with contractual assumptions of liability to the recipient, for
+ any liability that these contractual assumptions directly impose on
+ those licensors and authors.
+
+ All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10. If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term. If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+ If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+ Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+ 8. Termination.
+
+ You may not propagate or modify a covered work except as expressly
+provided under this License. Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+ However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+ Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+ 9. Acceptance Not Required for Having Copies.
+
+ You are not required to accept this License in order to receive or
+run a copy of the Program. Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance. However,
+nothing other than this License grants you permission to propagate or
+modify any covered work. These actions infringe copyright if you do
+not accept this License. Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+ 10. Automatic Licensing of Downstream Recipients.
+
+ Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License. You are not responsible
+for enforcing compliance by third parties with this License.
+
+ An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations. If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+ You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License. For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+ 11. Patents.
+
+ A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based. The
+work thus licensed is called the contributor's "contributor version".
+
+ A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version. For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+ In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement). To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+ If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients. "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+ If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+ A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License. You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+ Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+ 12. No Surrender of Others' Freedom.
+
+ If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all. For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+ 13. Use with the GNU Affero General Public License.
+
+ Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work. The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+ 14. Revised Versions of this License.
+
+ The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation. If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+ If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+ Later license versions may give you additional or different
+permissions. However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+ 15. Disclaimer of Warranty.
+
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. Limitation of Liability.
+
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+ 17. Interpretation of Sections 15 and 16.
+
+ If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+ If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+ <program> Copyright (C) <year> <name of author>
+ This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+ You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<http://www.gnu.org/licenses/>.
+
+ The GNU General Public License does not permit incorporating your program
+into proprietary programs. If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License. But first, please read
+<http://www.gnu.org/philosophy/why-not-lgpl.html>.
--- /dev/null
+Thu Sep 18 10:03:02 NZST 2008 bryan@ischo.com
+ * This file is no longer maintained, sorry
+
+Sat Aug 9 13:44:21 NZST 2008 bryan@ischo.com
+ * Fixed bug wherein keys with non-URI-safe characters did not work
+      correctly because they were not being URI-encoded in the request URI
+ * Split RPM and DEB packages into normal and devel packages
+
+Fri Aug 8 22:40:19 NZST 2008 bryan@ischo.com
+ * Branched 0.4
+ * Created RPM and Debian packaging
+
+Tue Aug 5 08:52:33 NZST 2008 bryan@ischo.com
+ * Bumped version number to 0.3
+ * Moved Makefile to GNUmakefile, added shared library build
+ * Added a bunch of GNU standard files (README, INSTALL, ChangeLog, etc)
--- /dev/null
+# GNUmakefile
+#
+# Copyright 2008 Bryan Ischo <bryan@ischo.com>
+#
+# This file is part of libs3.
+#
+# libs3 is free software: you can redistribute it and/or modify it under the
+# terms of the GNU General Public License as published by the Free Software
+# Foundation, version 3 of the License.
+#
+# In addition, as a special exception, the copyright holders give
+# permission to link the code of this library and its programs with the
+# OpenSSL library, and distribute linked combinations including the two.
+#
+# libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License version 3
+# along with libs3, in a file named COPYING. If not, see
+# <http://www.gnu.org/licenses/>.
+
+# I tried to use the autoconf/automake/autolocal/etc (i.e. autohell) tools
+# but I just couldn't stomach them. Since this is a Makefile for POSIX
+# systems, I will simply do away with autohell completely and use a GNU
+# Makefile. GNU make ought to be available pretty much everywhere, so I
+# don't see this being a significant issue for portability.
+
+# All commands assume a GNU compiler. For systems which do not use a GNU
+# compiler, write scripts with the same names as these commands, and taking
+# the same arguments, and translate the arguments and commands into the
+# appropriate non-POSIX ones as needed. libs3 assumes a GNU toolchain as
+# the most portable way to build software possible. Non-POSIX, non-GNU
+# systems can do the work of supporting this build infrastructure.
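+#
+# As a purely hypothetical sketch (not shipped with libs3), such a
+# replacement command could be a small shell script named "gcc" placed
+# earlier in the PATH:
+#
+#   #!/bin/sh
+#   # Argument translation from GNU-style flags would go here; as written,
+#   # this simply forwards everything to the system compiler.
+#   exec cc "$@"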
+
+
+# --------------------------------------------------------------------------
+# Set libs3 version number
+
+LIBS3_VER_MAJOR := 1
+LIBS3_VER_MINOR := 4
+LIBS3_VER := $(LIBS3_VER_MAJOR).$(LIBS3_VER_MINOR)
+
+
+# --------------------------------------------------------------------------
+# BUILD directory
+ifndef BUILD
+ BUILD := build
+endif
+
+
+# --------------------------------------------------------------------------
+# DESTDIR directory
+ifndef DESTDIR
+ DESTDIR := /usr
+endif
+
+
+# --------------------------------------------------------------------------
+# Acquire configuration information for libraries that libs3 depends upon
+
+ifndef CURL_LIBS
+ CURL_LIBS := $(shell curl-config --libs)
+endif
+
+ifndef CURL_CFLAGS
+ CURL_CFLAGS := $(shell curl-config --cflags)
+endif
+
+ifndef LIBXML2_LIBS
+ LIBXML2_LIBS := $(shell xml2-config --libs)
+endif
+
+ifndef LIBXML2_CFLAGS
+ LIBXML2_CFLAGS := $(shell xml2-config --cflags)
+endif
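+
+# Because each of the settings above is guarded by ifndef, any of them can
+# be supplied on the make command line instead, which is useful when
+# curl-config or xml2-config is not available.  The paths below are only
+# illustrative:
+#
+#   make CURL_CFLAGS="-I/opt/curl/include" \
+#        CURL_LIBS="-L/opt/curl/lib -lcurl" \
+#        LIBXML2_CFLAGS="-I/opt/libxml2/include/libxml2" \
+#        LIBXML2_LIBS="-L/opt/libxml2/lib -lxml2"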
+
+
+# --------------------------------------------------------------------------
+# These CFLAGS assume a GNU compiler. For other compilers, write a script
+# which converts these arguments into their equivalent for that particular
+# compiler.
+
+ifndef CFLAGS
+ CFLAGS = -O3
+endif
+
+CFLAGS += -Wall -Werror -Wshadow -Wextra -Iinc \
+ $(CURL_CFLAGS) $(LIBXML2_CFLAGS) \
+ -DLIBS3_VER_MAJOR=\"$(LIBS3_VER_MAJOR)\" \
+ -DLIBS3_VER_MINOR=\"$(LIBS3_VER_MINOR)\" \
+ -DLIBS3_VER=\"$(LIBS3_VER)\" \
+ -D__STRICT_ANSI__ \
+ -D_ISOC99_SOURCE \
+ -D_POSIX_C_SOURCE=200112L
+
+LDFLAGS = $(CURL_LIBS) $(LIBXML2_LIBS) -lpthread
+
+
+# --------------------------------------------------------------------------
+# Default targets are everything
+
+.PHONY: all
+all: exported test
+
+
+# --------------------------------------------------------------------------
+# Exported targets are the library and driver program
+
+.PHONY: exported
+exported: libs3 s3 headers
+
+
+# --------------------------------------------------------------------------
+# Install target
+
+.PHONY: install
+install: exported
+ install -Dps -m u+rwx,go+rx $(BUILD)/bin/s3 $(DESTDIR)/bin/s3
+ install -Dp -m u+rw,go+r $(BUILD)/include/libs3.h \
+ $(DESTDIR)/include/libs3.h
+ install -Dp -m u+rw,go+r $(BUILD)/lib/libs3.a $(DESTDIR)/lib/libs3.a
+ install -Dps -m u+rw,go+r $(BUILD)/lib/libs3.so.$(LIBS3_VER_MAJOR) \
+ $(DESTDIR)/lib/libs3.so.$(LIBS3_VER)
+ ln -sf libs3.so.$(LIBS3_VER) $(DESTDIR)/lib/libs3.so.$(LIBS3_VER_MAJOR)
+ ln -sf libs3.so.$(LIBS3_VER_MAJOR) $(DESTDIR)/lib/libs3.so
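+
+# DESTDIR acts as the installation prefix for this target (it defaults to
+# /usr above), so installing somewhere else is just a matter of overriding
+# it, for example:
+#
+#   make DESTDIR=/usr/local install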
+
+
+# --------------------------------------------------------------------------
+# Uninstall target
+
+.PHONY: uninstall
+uninstall:
+ rm -f $(DESTDIR)/bin/s3 \
+ $(DESTDIR)/include/libs3.h \
+ $(DESTDIR)/lib/libs3.a \
+ $(DESTDIR)/lib/libs3.so \
+ $(DESTDIR)/lib/libs3.so.$(LIBS3_VER_MAJOR) \
+ $(DESTDIR)/lib/libs3.so.$(LIBS3_VER) \
+
+
+# --------------------------------------------------------------------------
+# Debian package target
+
+DEBPKG = $(BUILD)/pkg/libs3_$(LIBS3_VER).deb
+DEBDEVPKG = $(BUILD)/pkg/libs3-dev_$(LIBS3_VER).deb
+
+.PHONY: deb
+deb: $(DEBPKG) $(DEBDEVPKG)
+
+$(DEBPKG): DEBARCH = $(shell dpkg-architecture | grep ^DEB_BUILD_ARCH= | \
+ cut -d '=' -f 2)
+$(DEBPKG): exported $(BUILD)/deb/DEBIAN/control $(BUILD)/deb/DEBIAN/shlibs \
+ $(BUILD)/deb/DEBIAN/postinst \
+ $(BUILD)/deb/usr/share/doc/libs3/changelog.gz \
+ $(BUILD)/deb/usr/share/doc/libs3/changelog.Debian.gz \
+ $(BUILD)/deb/usr/share/doc/libs3/copyright
+ DESTDIR=$(BUILD)/deb/usr $(MAKE) install
+ rm -rf $(BUILD)/deb/usr/include
+ rm -f $(BUILD)/deb/usr/lib/libs3.a
+ @mkdir -p $(dir $@)
+ fakeroot dpkg-deb -b $(BUILD)/deb $@
+ mv $@ $(BUILD)/pkg/libs3_$(LIBS3_VER)_$(DEBARCH).deb
+
+$(DEBDEVPKG): DEBARCH = $(shell dpkg-architecture | grep ^DEB_BUILD_ARCH= | \
+ cut -d '=' -f 2)
+$(DEBDEVPKG): exported $(BUILD)/deb-dev/DEBIAN/control \
+ $(BUILD)/deb-dev/usr/share/doc/libs3-dev/changelog.gz \
+ $(BUILD)/deb-dev/usr/share/doc/libs3-dev/changelog.Debian.gz \
+ $(BUILD)/deb-dev/usr/share/doc/libs3-dev/copyright
+ DESTDIR=$(BUILD)/deb-dev/usr $(MAKE) install
+ rm -rf $(BUILD)/deb-dev/usr/bin
+ rm -f $(BUILD)/deb-dev/usr/lib/libs3.so*
+ @mkdir -p $(dir $@)
+ fakeroot dpkg-deb -b $(BUILD)/deb-dev $@
+ mv $@ $(BUILD)/pkg/libs3-dev_$(LIBS3_VER)_$(DEBARCH).deb
+
+$(BUILD)/deb/DEBIAN/control: debian/control
+ @mkdir -p $(dir $@)
+ echo -n "Depends: " > $@
+ dpkg-shlibdeps -O $(BUILD)/lib/libs3.so.$(LIBS3_VER_MAJOR) | \
+ cut -d '=' -f 2- >> $@
+ sed -e 's/LIBS3_VERSION/$(LIBS3_VER)/' \
+ < $< | sed -e 's/DEBIAN_ARCHITECTURE/$(DEBARCH)/' | \
+ grep -v ^Source: >> $@
+
+$(BUILD)/deb-dev/DEBIAN/control: debian/control.dev
+ @mkdir -p $(dir $@)
+ sed -e 's/LIBS3_VERSION/$(LIBS3_VER)/' \
+ < $< | sed -e 's/DEBIAN_ARCHITECTURE/$(DEBARCH)/' > $@
+
+$(BUILD)/deb/DEBIAN/shlibs:
+ echo -n "libs3 $(LIBS3_VER_MAJOR) libs3 " > $@
+ echo "(>= $(LIBS3_VER))" >> $@
+
+$(BUILD)/deb/DEBIAN/postinst: debian/postinst
+ @mkdir -p $(dir $@)
+ cp $< $@
+
+$(BUILD)/deb/usr/share/doc/libs3/copyright: LICENSE
+ @mkdir -p $(dir $@)
+ cp $< $@
+ @echo >> $@
+ @echo -n "An alternate location for the GNU General Public " >> $@
+ @echo "License version 3 on Debian" >> $@
+ @echo "systems is /usr/share/common-licenses/GPL-3." >> $@
+
+$(BUILD)/deb-dev/usr/share/doc/libs3-dev/copyright: LICENSE
+ @mkdir -p $(dir $@)
+ cp $< $@
+ @echo >> $@
+ @echo -n "An alternate location for the GNU General Public " >> $@
+ @echo "License version 3 on Debian" >> $@
+ @echo "systems is /usr/share/common-licenses/GPL-3." >> $@
+
+$(BUILD)/deb/usr/share/doc/libs3/changelog.gz: debian/changelog
+ @mkdir -p $(dir $@)
+ gzip --best -c $< > $@
+
+$(BUILD)/deb-dev/usr/share/doc/libs3-dev/changelog.gz: debian/changelog
+ @mkdir -p $(dir $@)
+ gzip --best -c $< > $@
+
+$(BUILD)/deb/usr/share/doc/libs3/changelog.Debian.gz: debian/changelog.Debian
+ @mkdir -p $(dir $@)
+ gzip --best -c $< > $@
+
+$(BUILD)/deb-dev/usr/share/doc/libs3-dev/changelog.Debian.gz: \
+ debian/changelog.Debian
+ @mkdir -p $(dir $@)
+ gzip --best -c $< > $@
+
+
+# --------------------------------------------------------------------------
+# Compile target patterns
+
+$(BUILD)/obj/%.o: src/%.c
+ @mkdir -p $(dir $@)
+ gcc $(CFLAGS) -o $@ -c $<
+
+$(BUILD)/obj/%.do: src/%.c
+ @mkdir -p $(dir $@)
+ gcc $(CFLAGS) -fpic -fPIC -o $@ -c $<
+
+
+# --------------------------------------------------------------------------
+# libs3 library targets
+
+LIBS3_SHARED = $(BUILD)/lib/libs3.so.$(LIBS3_VER_MAJOR)
+
+.PHONY: libs3
+libs3: $(LIBS3_SHARED) $(BUILD)/lib/libs3.a
+
+LIBS3_SOURCES := src/acl.c src/bucket.c src/error_parser.c src/general.c \
+ src/object.c src/request.c src/request_context.c \
+ src/response_headers_handler.c src/service_access_logging.c \
+ src/service.c src/simplexml.c src/util.c
+
+$(LIBS3_SHARED): $(LIBS3_SOURCES:src/%.c=$(BUILD)/obj/%.do)
+ @mkdir -p $(dir $@)
+ gcc -shared -Wl,-soname,libs3.so.$(LIBS3_VER_MAJOR) -o $@ $^ $(LDFLAGS)
+
+$(BUILD)/lib/libs3.a: $(LIBS3_SOURCES:src/%.c=$(BUILD)/obj/%.o)
+ @mkdir -p $(dir $@)
+ $(AR) cr $@ $^
+
+
+# --------------------------------------------------------------------------
+# Driver program targets
+
+.PHONY: s3
+s3: $(BUILD)/bin/s3
+
+$(BUILD)/bin/s3: $(BUILD)/obj/s3.o $(LIBS3_SHARED)
+ @mkdir -p $(dir $@)
+ gcc -o $@ $^ $(LDFLAGS)
+
+
+# --------------------------------------------------------------------------
+# libs3 header targets
+
+.PHONY: headers
+headers: $(BUILD)/include/libs3.h
+
+$(BUILD)/include/libs3.h: inc/libs3.h
+ @mkdir -p $(dir $@)
+ cp $< $@
+
+
+# --------------------------------------------------------------------------
+# Test targets
+
+.PHONY: test
+test: $(BUILD)/bin/testsimplexml
+
+$(BUILD)/bin/testsimplexml: $(BUILD)/obj/testsimplexml.o $(BUILD)/lib/libs3.a
+ @mkdir -p $(dir $@)
+ gcc -o $@ $^ $(LIBXML2_LIBS)
+
+
+# --------------------------------------------------------------------------
+# Clean target
+
+.PHONY: clean
+clean:
+ rm -rf $(BUILD)
--- /dev/null
+# GNUmakefile
+#
+# Copyright 2008 Bryan Ischo <bryan@ischo.com>
+#
+# This file is part of libs3.
+#
+# libs3 is free software: you can redistribute it and/or modify it under the
+# terms of the GNU General Public License as published by the Free Software
+# Foundation, version 3 of the License.
+#
+# In addition, as a special exception, the copyright holders give
+# permission to link the code of this library and its programs with the
+# OpenSSL library, and distribute linked combinations including the two.
+#
+# libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License version 3
+# along with libs3, in a file named COPYING. If not, see
+# <http://www.gnu.org/licenses/>.
+
+# I tried to use the autoconf/automake/autolocal/etc (i.e. autohell) tools
+# but I just couldn't stomach them. Since this is a Makefile for POSIX
+# systems, I will simply do away with autohell completely and use a GNU
+# Makefile. GNU make ought to be available pretty much everywhere, so I
+# don't see this being a significant issue for portability.
+
+# All commands assume a GNU compiler. For systems which do not use a GNU
+# compiler, write scripts with the same names as these commands, and taking
+# the same arguments, and translate the arguments and commands into the
+# appropriate non-POSIX ones as needed. libs3 assumes a GNU toolchain as
+# the most portable way to build software possible. Non-POSIX, non-GNU
+# systems can do the work of supporting this build infrastructure.
+
+
+# --------------------------------------------------------------------------
+# Set libs3 version number
+
+LIBS3_VER_MAJOR := 1
+LIBS3_VER_MINOR := 4
+LIBS3_VER := $(LIBS3_VER_MAJOR).$(LIBS3_VER_MINOR)
+
+
+# --------------------------------------------------------------------------
+# BUILD directory
+ifndef BUILD
+ BUILD := build
+endif
+
+
+# --------------------------------------------------------------------------
+# DESTDIR directory
+ifndef DESTDIR
+ DESTDIR := libs3-$(LIBS3_VER)
+endif
+
+
+# --------------------------------------------------------------------------
+# Acquire configuration information for libraries that libs3 depends upon
+
+ifndef CURL_LIBS
+ CURL_LIBS := -Lc:\libs3-libs\bin -lcurl
+endif
+
+ifndef CURL_CFLAGS
+ CURL_CFLAGS := -Ic:\libs3-libs\include
+endif
+
+ifndef LIBXML2_LIBS
+ LIBXML2_LIBS := -Lc:\libs3-libs\bin -lxml2
+endif
+
+ifndef LIBXML2_CFLAGS
+ LIBXML2_CFLAGS := -Ic:\libs3-libs\include
+endif
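+
+# Note that these defaults assume the curl and libxml2 MinGW builds have
+# been unpacked under c:\libs3-libs.  If they are installed elsewhere, the
+# same variables can be overridden on the make command line (the paths here
+# are only an example):
+#
+#   make CURL_CFLAGS="-Ic:\curl\include" CURL_LIBS="-Lc:\curl\bin -lcurl"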
+
+
+# --------------------------------------------------------------------------
+# These CFLAGS assume a GNU compiler. For other compilers, write a script
+# which converts these arguments into their equivalent for that particular
+# compiler.
+
+ifndef CFLAGS
+ CFLAGS = -O3
+endif
+
+CFLAGS += -Wall -Werror -Wshadow -Wextra -Iinc \
+ $(CURL_CFLAGS) $(LIBXML2_CFLAGS) \
+ -DLIBS3_VER_MAJOR=\"$(LIBS3_VER_MAJOR)\" \
+ -DLIBS3_VER_MINOR=\"$(LIBS3_VER_MINOR)\" \
+ -DLIBS3_VER=\"$(LIBS3_VER)\" \
+ -D__STRICT_ANSI__ \
+ -D_ISOC99_SOURCE \
+ -D_POSIX_C_SOURCE=200112L \
+ -Dsleep=Sleep -DFOPEN_EXTRA_FLAGS=\"b\" \
+ -Iinc/mingw -include windows.h
+
+LDFLAGS = $(CURL_LIBS) $(LIBXML2_LIBS)
+
+# --------------------------------------------------------------------------
+# Default targets are everything
+
+.PHONY: all
+all: exported test
+
+
+# --------------------------------------------------------------------------
+# Exported targets are the library and driver program
+
+.PHONY: exported
+exported: libs3 s3 headers
+
+
+# --------------------------------------------------------------------------
+# Install target
+
+.PHONY: install
+install: exported
+ -@mkdir $(DESTDIR)\bin
+ -@mkdir $(DESTDIR)\include
+ -@mkdir $(DESTDIR)\lib
+ copy $(BUILD)\bin\s3.exe $(DESTDIR)\bin
+ copy $(BUILD)\bin\libs3.dll $(DESTDIR)\bin
+ copy $(BUILD)\lib\libs3.a $(DESTDIR)\lib
+ copy mswin\libs3.def $(DESTDIR)\lib
+ copy $(BUILD)\include\libs3.h $(DESTDIR)\include
+ copy LICENSE $(DESTDIR)
+ copy COPYING $(DESTDIR)
+
+
+# --------------------------------------------------------------------------
+# Compile target patterns
+
+$(BUILD)/obj/%.o: src/%.c
+ -@mkdir $(subst /,\,$(dir $@))
+ gcc $(CFLAGS) -o $@ -c $<
+
+
+# --------------------------------------------------------------------------
+# libs3 library targets
+
+LIBS3_SHARED = $(BUILD)/bin/libs3.dll
+
+.PHONY: libs3
+libs3: $(LIBS3_SHARED) $(BUILD)/lib/libs3.a
+
+LIBS3_SOURCES := src/acl.c src/bucket.c src/error_parser.c src/general.c \
+ src/object.c src/request.c src/request_context.c \
+ src/response_headers_handler.c src/service_access_logging.c \
+ src/service.c src/simplexml.c src/util.c src/mingw_functions.c
+
+$(LIBS3_SHARED): $(LIBS3_SOURCES:src/%.c=$(BUILD)/obj/%.o)
+ -@mkdir $(subst /,\,$(dir $@))
+ gcc -shared -o $@ $^ $(LDFLAGS) -lws2_32
+
+$(BUILD)/lib/libs3.a: $(LIBS3_SHARED)
+ -@mkdir $(subst /,\,$(dir $@))
+ dlltool --def mswin\libs3.def --dllname $(subst /,\,$<) \
+ --output-lib $(subst /,\,$@)
+
+
+# --------------------------------------------------------------------------
+# Driver program targets
+
+.PHONY: s3
+s3: $(BUILD)/bin/s3.exe
+
+$(BUILD)/bin/s3.exe: $(BUILD)/obj/s3.o $(BUILD)/obj/mingw_s3_functions.o \
+ $(BUILD)/lib/libs3.a
+ -@mkdir $(subst /,\,$(dir $@))
+ gcc -o $@ $^ $(LDFLAGS) -lws2_32
+
+
+# --------------------------------------------------------------------------
+# libs3 header targets
+
+.PHONY: headers
+headers: $(BUILD)\include\libs3.h
+
+$(BUILD)\include\libs3.h: inc\libs3.h
+ -@mkdir $(subst /,\,$(dir $@))
+ copy $< $@
+
+
+# --------------------------------------------------------------------------
+# Test targets
+
+.PHONY: test
+test: $(BUILD)/bin/testsimplexml
+
+$(BUILD)/bin/testsimplexml: $(BUILD)/obj/testsimplexml.o \
+ $(BUILD)/obj/simplexml.o
+ -@mkdir $(subst /,\,$(dir $@))
+ gcc -o $@ $^ $(LIBXML2_LIBS)
+
+
+# --------------------------------------------------------------------------
+# Clean target
+
+.PHONY: clean
+clean:
+ mswin\rmrf.bat $(BUILD)
--- /dev/null
+# GNUmakefile
+#
+# Copyright 2008 Bryan Ischo <bryan@ischo.com>
+#
+# This file is part of libs3.
+#
+# libs3 is free software: you can redistribute it and/or modify it under the
+# terms of the GNU General Public License as published by the Free Software
+# Foundation, version 3 of the License.
+#
+# In addition, as a special exception, the copyright holders give
+# permission to link the code of this library and its programs with the
+# OpenSSL library, and distribute linked combinations including the two.
+#
+# libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License version 3
+# along with libs3, in a file named COPYING. If not, see
+# <http://www.gnu.org/licenses/>.
+
+# I tried to use the autoconf/automake/autolocal/etc (i.e. autohell) tools
+# but I just couldn't stomach them. Since this is a Makefile for POSIX
+# systems, I will simply do away with autohell completely and use a GNU
+# Makefile. GNU make ought to be available pretty much everywhere, so I
+# don't see this being a significant issue for portability.
+
+# All commands assume a GNU compiler. For systems which do not use a GNU
+# compiler, write scripts with the same names as these commands, and taking
+# the same arguments, and translate the arguments and commands into the
+# appropriate non-POSIX ones as needed. libs3 assumes a GNU toolchain as
+# the most portable way to build software possible. Non-POSIX, non-GNU
+# systems can do the work of supporting this build infrastructure.
+
+
+# --------------------------------------------------------------------------
+# Set libs3 version number
+
+LIBS3_VER_MAJOR := 1
+LIBS3_VER_MINOR := 4
+LIBS3_VER := $(LIBS3_VER_MAJOR).$(LIBS3_VER_MINOR)
+
+
+# --------------------------------------------------------------------------
+# BUILD directory
+ifndef BUILD
+ BUILD := build
+endif
+
+
+# --------------------------------------------------------------------------
+# DESTDIR directory
+ifndef DESTDIR
+ DESTDIR := /usr
+endif
+
+
+# --------------------------------------------------------------------------
+# Acquire configuration information for libraries that libs3 depends upon
+
+ifndef CURL_LIBS
+ CURL_LIBS := $(shell curl-config --libs)
+endif
+
+ifndef CURL_CFLAGS
+ CURL_CFLAGS := $(shell curl-config --cflags)
+endif
+
+ifndef LIBXML2_LIBS
+ LIBXML2_LIBS := $(shell xml2-config --libs)
+endif
+
+ifndef LIBXML2_CFLAGS
+ LIBXML2_CFLAGS := $(shell xml2-config --cflags)
+endif
+
+
+# --------------------------------------------------------------------------
+# These CFLAGS assume a GNU compiler. For other compilers, write a script
+# which converts these arguments into their equivalent for that particular
+# compiler.
+
+ifndef CFLAGS
+ CFLAGS = -O3
+endif
+
+CFLAGS += -Wall -Werror -Wshadow -Wextra -Iinc \
+ $(CURL_CFLAGS) $(LIBXML2_CFLAGS) \
+ -DLIBS3_VER_MAJOR=\"$(LIBS3_VER_MAJOR)\" \
+ -DLIBS3_VER_MINOR=\"$(LIBS3_VER_MINOR)\" \
+ -DLIBS3_VER=\"$(LIBS3_VER)\" \
+ -D__STRICT_ANSI__ \
+ -D_ISOC99_SOURCE \
+ -fno-common
+
+LDFLAGS = $(CURL_LIBS) $(LIBXML2_LIBS) -lpthread
+
+
+# --------------------------------------------------------------------------
+# Default targets are everything
+
+.PHONY: all
+all: exported test
+
+
+# --------------------------------------------------------------------------
+# Exported targets are the library and driver program
+
+.PHONY: exported
+exported: libs3 s3 headers
+
+
+# --------------------------------------------------------------------------
+# Install target
+
+.PHONY: install
+install: exported
+ install -ps -m u+rwx,go+rx $(BUILD)/bin/s3 $(DESTDIR)/bin/s3
+ install -p -m u+rw,go+r $(BUILD)/include/libs3.h \
+ $(DESTDIR)/include/libs3.h
+ install -p -m u+rw,go+r $(BUILD)/lib/libs3.a $(DESTDIR)/lib/libs3.a
+ install -p -m u+rw,go+r $(BUILD)/lib/libs3.$(LIBS3_VER_MAJOR).dylib \
+ $(DESTDIR)/lib/libs3.$(LIBS3_VER).dylib
+ ln -sf libs3.$(LIBS3_VER).dylib \
+ $(DESTDIR)/lib/libs3.$(LIBS3_VER_MAJOR).dylib
+ ln -sf libs3.$(LIBS3_VER_MAJOR).dylib $(DESTDIR)/lib/libs3.dylib
+
+
+# --------------------------------------------------------------------------
+# Uninstall target
+
+.PHONY: uninstall
+uninstall:
+ rm -f $(DESTDIR)/bin/s3 \
+ $(DESTDIR)/include/libs3.h \
+ $(DESTDIR)/lib/libs3.a \
+ $(DESTDIR)/lib/libs3.dylib \
+ $(DESTDIR)/lib/libs3.$(LIBS3_VER_MAJOR).dylib \
+ $(DESTDIR)/lib/libs3.$(LIBS3_VER).dylib \
+
+
+# --------------------------------------------------------------------------
+# Debian package target
+
+DEBPKG = $(BUILD)/pkg/libs3_$(LIBS3_VER).deb
+DEBDEVPKG = $(BUILD)/pkg/libs3-dev_$(LIBS3_VER).deb
+
+.PHONY: deb
+deb: $(DEBPKG) $(DEBDEVPKG)
+
+$(DEBPKG): DEBARCH = $(shell dpkg-architecture | grep ^DEB_BUILD_ARCH= | \
+ cut -d '=' -f 2)
+$(DEBPKG): exported $(BUILD)/deb/DEBIAN/control $(BUILD)/deb/DEBIAN/shlibs \
+ $(BUILD)/deb/DEBIAN/postinst \
+ $(BUILD)/deb/usr/share/doc/libs3/changelog.gz \
+ $(BUILD)/deb/usr/share/doc/libs3/changelog.Debian.gz \
+ $(BUILD)/deb/usr/share/doc/libs3/copyright
+ DESTDIR=$(BUILD)/deb/usr $(MAKE) install
+ rm -rf $(BUILD)/deb/usr/include
+ rm -f $(BUILD)/deb/usr/lib/libs3.a
+ @mkdir -p $(dir $@)
+ fakeroot dpkg-deb -b $(BUILD)/deb $@
+ mv $@ $(BUILD)/pkg/libs3_$(LIBS3_VER)_$(DEBARCH).deb
+
+$(DEBDEVPKG): DEBARCH = $(shell dpkg-architecture | grep ^DEB_BUILD_ARCH= | \
+ cut -d '=' -f 2)
+$(DEBDEVPKG): exported $(BUILD)/deb-dev/DEBIAN/control \
+ $(BUILD)/deb-dev/usr/share/doc/libs3-dev/changelog.gz \
+ $(BUILD)/deb-dev/usr/share/doc/libs3-dev/changelog.Debian.gz \
+ $(BUILD)/deb-dev/usr/share/doc/libs3-dev/copyright
+ DESTDIR=$(BUILD)/deb-dev/usr $(MAKE) install
+ rm -rf $(BUILD)/deb-dev/usr/bin
+ rm -f $(BUILD)/deb-dev/usr/lib/libs3*.dylib
+ @mkdir -p $(dir $@)
+ fakeroot dpkg-deb -b $(BUILD)/deb-dev $@
+ mv $@ $(BUILD)/pkg/libs3-dev_$(LIBS3_VER)_$(DEBARCH).deb
+
+$(BUILD)/deb/DEBIAN/control: debian/control
+ @mkdir -p $(dir $@)
+ echo -n "Depends: " > $@
+ dpkg-shlibdeps -O $(BUILD)/lib/libs3.$(LIBS3_VER_MAJOR).dylib | \
+ cut -d '=' -f 2- >> $@
+ sed -e 's/LIBS3_VERSION/$(LIBS3_VER)/' \
+ < $< | sed -e 's/DEBIAN_ARCHITECTURE/$(DEBARCH)/' | \
+ grep -v ^Source: >> $@
+
+$(BUILD)/deb-dev/DEBIAN/control: debian/control.dev
+ @mkdir -p $(dir $@)
+ sed -e 's/LIBS3_VERSION/$(LIBS3_VER)/' \
+ < $< | sed -e 's/DEBIAN_ARCHITECTURE/$(DEBARCH)/' > $@
+
+$(BUILD)/deb/DEBIAN/shlibs:
+ echo -n "libs3 $(LIBS3_VER_MAJOR) libs3 " > $@
+ echo "(>= $(LIBS3_VER))" >> $@
+
+$(BUILD)/deb/DEBIAN/postinst: debian/postinst
+ @mkdir -p $(dir $@)
+ cp $< $@
+
+$(BUILD)/deb/usr/share/doc/libs3/copyright: LICENSE
+ @mkdir -p $(dir $@)
+ cp $< $@
+ @echo >> $@
+ @echo -n "An alternate location for the GNU General Public " >> $@
+ @echo "License version 3 on Debian" >> $@
+ @echo "systems is /usr/share/common-licenses/GPL-3." >> $@
+
+$(BUILD)/deb-dev/usr/share/doc/libs3-dev/copyright: LICENSE
+ @mkdir -p $(dir $@)
+ cp $< $@
+ @echo >> $@
+ @echo -n "An alternate location for the GNU General Public " >> $@
+ @echo "License version 3 on Debian" >> $@
+ @echo "systems is /usr/share/common-licenses/GPL-3." >> $@
+
+$(BUILD)/deb/usr/share/doc/libs3/changelog.gz: debian/changelog
+ @mkdir -p $(dir $@)
+ gzip --best -c $< > $@
+
+$(BUILD)/deb-dev/usr/share/doc/libs3-dev/changelog.gz: debian/changelog
+ @mkdir -p $(dir $@)
+ gzip --best -c $< > $@
+
+$(BUILD)/deb/usr/share/doc/libs3/changelog.Debian.gz: debian/changelog.Debian
+ @mkdir -p $(dir $@)
+ gzip --best -c $< > $@
+
+$(BUILD)/deb-dev/usr/share/doc/libs3-dev/changelog.Debian.gz: \
+ debian/changelog.Debian
+ @mkdir -p $(dir $@)
+ gzip --best -c $< > $@
+
+
+# --------------------------------------------------------------------------
+# Compile target patterns
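+#
+# (%.o objects are linked into the static library; %.do objects are compiled
+# with -fPIC so that they can be linked into the shared library.)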
+
+$(BUILD)/obj/%.o: src/%.c
+ @mkdir -p $(dir $@)
+ gcc $(CFLAGS) -o $@ -c $<
+
+$(BUILD)/obj/%.do: src/%.c
+ @mkdir -p $(dir $@)
+ gcc $(CFLAGS) -fpic -fPIC -o $@ -c $<
+
+
+# --------------------------------------------------------------------------
+# libs3 library targets
+
+LIBS3_SHARED = $(BUILD)/lib/libs3.$(LIBS3_VER_MAJOR).dylib
+
+.PHONY: libs3
+libs3: $(LIBS3_SHARED) $(BUILD)/lib/libs3.a
+
+LIBS3_SOURCES := src/acl.c src/bucket.c src/error_parser.c src/general.c \
+ src/object.c src/request.c src/request_context.c \
+ src/response_headers_handler.c src/service_access_logging.c \
+ src/service.c src/simplexml.c src/util.c
+
+$(LIBS3_SHARED): $(LIBS3_SOURCES:src/%.c=$(BUILD)/obj/%.do)
+ @mkdir -p $(dir $@)
+ gcc -dynamiclib -install_name libs3.$(LIBS3_VER_MAJOR).dylib \
+ -compatibility_version $(LIBS3_VER_MAJOR) \
+ -current_version $(LIBS3_VER) -o $@ $^ $(LDFLAGS)
+
+$(BUILD)/lib/libs3.a: $(LIBS3_SOURCES:src/%.c=$(BUILD)/obj/%.o)
+ @mkdir -p $(dir $@)
+ $(AR) cr $@ $^
+
+
+# --------------------------------------------------------------------------
+# Driver program targets
+
+.PHONY: s3
+s3: $(BUILD)/bin/s3
+
+$(BUILD)/bin/s3: $(BUILD)/obj/s3.o $(LIBS3_SHARED)
+ @mkdir -p $(dir $@)
+ gcc -o $@ $^ $(LDFLAGS)
+
+
+# --------------------------------------------------------------------------
+# libs3 header targets
+
+.PHONY: headers
+headers: $(BUILD)/include/libs3.h
+
+$(BUILD)/include/libs3.h: inc/libs3.h
+ @mkdir -p $(dir $@)
+ cp $< $@
+
+
+# --------------------------------------------------------------------------
+# Test targets
+
+.PHONY: test
+test: $(BUILD)/bin/testsimplexml
+
+$(BUILD)/bin/testsimplexml: $(BUILD)/obj/testsimplexml.o $(BUILD)/lib/libs3.a
+ @mkdir -p $(dir $@)
+ gcc -o $@ $^ $(LIBXML2_LIBS)
+
+
+# --------------------------------------------------------------------------
+# Clean target
+
+.PHONY: clean
+clean:
+ rm -rf $(BUILD)
--- /dev/null
+
+To install libs3 on a POSIX system (except Microsoft Windows):
+--------------------------------------------------------------
+
+Note that all POSIX builds have prerequisites: development libraries that
+libs3 requires and that must be installed before libs3 is built. The easiest
+way to find out what they are is to run the build and see what it reports as
+missing.
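+
+For example, on a Debian or Ubuntu system the prerequisites can typically be
+installed with (package names may differ on other distributions):
+
+apt-get install build-essential libcurl4-openssl-dev libxml2-dev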
+
+*** For RPM-based systems (Fedora Core, Mandrake, etc) ***
+
+* rpmbuild -ta <libs3 archive>
+
+for example:
+
+rpmbuild -ta libs3-0.3.tar.gz
+
+
+*** For dpkg-based systems (Debian, Ubuntu, etc) ***
+
+* make deb
+
+This will produce a Debian package in the build/pkg directory.
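+
+The resulting packages can then be installed with dpkg as root, for example:
+
+dpkg -i build/pkg/libs3_*.deb build/pkg/libs3-dev_*.deb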
+
+
+*** For all other systems ***
+
+* make [DESTDIR=destination root] install
+
+DESTDIR defaults to /usr
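+
+for example, to install under /usr/local instead of /usr:
+
+make DESTDIR=/usr/local install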
+
+
+To install libs3 on a Microsoft Windows system:
+-----------------------------------------------
+
+*** Using MingW ***
+
+* libs3 can be built on Windows using the MingW compiler. No other tool
+ is needed. However, the following libraries are needed to build libs3:
+
+ - curl development libraries
+ - libxml2 development libraries, and the libraries that it requires:
+ - iconv
+ - zlib
+
+  These projects are independent of libs3; their release schedules and means
+  of distribution make it impractical to provide download links here and keep
+  them up to date, so no attempt is made to do so.
+
+ Development libraries and other files can be placed in:
+ c:\libs3-libs\bin
+ c:\libs3-libs\include
+
+ If the above locations are used, then the GNUmakefile.mingw will work with
+ no special caveats. If the above locations are not used, then the following
+ environment variables should be set:
+ CURL_LIBS should be set to the MingW compiler flags needed to locate and
+ link in the curl libraries
+ CURL_CFLAGS should be set to the MingW compiler flags needed to locate and
+ include the curl headers
+ LIBXML2_LIBS should be set to the MingW compiler flags needed to locate and
+ link in the libxml2 libraries
+ LIBXML2_CFLAGS should be set to the MingW compiler flags needed to locate and
+ include the libxml2 headers
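+
+  For example, if the development files were unpacked under c:\devlibs (an
+  illustrative location only), the variables might be set as follows before
+  running the mingw32-make command shown below:
+
+  set CURL_CFLAGS=-Ic:/devlibs/include
+  set CURL_LIBS=-Lc:/devlibs/bin -lcurl
+  set LIBXML2_CFLAGS=-Ic:/devlibs/include
+  set LIBXML2_LIBS=-Lc:/devlibs/bin -lxml2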
+
+* mingw32-make [DESTDIR=destination] -f GNUmakefile.mingw install
+
+DESTDIR defaults to libs3-<version>
+
+* DESTDIR can be zipped up into a .zip file for distribution. For best
+ results, the dependent libraries (curl, openssl, etc) should be included,
+ along with their licenses.
--- /dev/null
+Copyright 2008 Bryan Ischo <bryan@ischo.com>
+
+libs3 is free software: you can redistribute it and/or modify it under the
+terms of the GNU General Public License as published by the Free Software
+Foundation, version 3 of the License.
+
+In addition, as a special exception, the copyright holders give
+permission to link the code of this library and its programs with the
+OpenSSL library, and distribute linked combinations including the two.
+
+libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+details.
+
+You should have received a copy of the GNU General Public License version 3
+along with libs3, in a file named COPYING. If not, see
+<http://www.gnu.org/licenses/>.
+
+
--- /dev/null
+This directory contains the libs3 library.
+
+The libs3 library is free software. See the file LICENSE for copying
+permission.
--- /dev/null
+* Implement functions for generating form stuff for posting to s3
+
+* Write s3 man page
--- /dev/null
+# Contributor: Bryan Ischo <bryan@ischo.com>
+pkgname=libs3
+pkgver=1.4
+pkgrel=1
+pkgdesc="C Library and Tools for Amazon S3 Access"
+arch=('i686' 'x86_64')
+url="http://libs3.ischo.com/index.html"
+license=('GPL')
+groups=()
+depends=('curl' 'libxml2' 'openssl')
+makedepends=('make')
+provides=()
+conflicts=()
+replaces=()
+backup=()
+options=()
+install=
+source=(http://libs3.ischo.com/$pkgname-$pkgver.tar.gz)
+noextract=()
+md5sums=('source md5') #generate with 'makepkg -g'
+
+build() {
+  cd "$srcdir/$pkgname-$pkgver"
+
+  make exported || return 1
+
+  # Stage the exported build output under /usr within the package directory
+  mkdir -p "$pkgdir/usr"
+  cp -a build/bin build/include build/lib "$pkgdir/usr"
+}
+
+# vim:set ts=2 sw=2 et:
--- /dev/null
+libs3 (all) unstable; urgency=low
+
+ * This file is not maintained. See project source code for changes.
+
+ -- Bryan Ischo <bryan@ischo.com> Wed, 06 Aug 2008 09:36:43 -0400
--- /dev/null
+libs3 (all) unstable; urgency=low
+
+ * libs3 Debian maintainer and upstream author are identical.
+ Therefore see normal changelog file for Debian changes.
+
+ -- Bryan Ischo <bryan@ischo.com> Wed, 06 Aug 2008 09:36:43 -0400
--- /dev/null
+Package: libs3
+Source: THIS LINE WILL BE REMOVED, dpkg-shlibdeps NEEDS IT
+Version: LIBS3_VERSION
+Architecture: DEBIAN_ARCHITECTURE
+Section: net
+Priority: extra
+Maintainer: Bryan Ischo <bryan@ischo.com>
+Homepage: http://libs3.ischo.com/index.html
+Description: C Library and Tools for Amazon S3 Access
+ This package includes the libs3 shared object library, needed to run
+ applications compiled against libs3, and additionally contains the s3
+ utility for accessing Amazon S3.
--- /dev/null
+Package: libs3-dev
+Version: LIBS3_VERSION
+Architecture: DEBIAN_ARCHITECTURE
+Section: libdevel
+Priority: extra
+Depends: libs3 (>= LIBS3_VERSION)
+Maintainer: Bryan Ischo <bryan@ischo.com>
+Homepage: http://libs3.ischo.com/index.html
+Description: C Development Library for Amazon S3 Access
+ This library provides an API for using Amazon's S3 service (see
+ http://s3.amazonaws.com). Its design goals are:
+ .
+ - To provide a simple and straightforward API for accessing all of S3's
+ functionality
+ - To not require the developer using libs3 to know anything about:
+ - HTTP
+ - XML
+ - SSL
+ In other words, this API is meant to stand on its own, without requiring
+ any implicit knowledge of how S3 services are accessed using HTTP
+ protocols.
+ - To be usable from multithreaded code
+ - To be usable by code which wants to process multiple S3 requests
+ simultaneously from a single thread
+ - To be usable in the simple, straightforward way using sequentialized
+ blocking requests
--- /dev/null
+#!/bin/sh
+
+ldconfig
--- /dev/null
+# Doxyfile 1.2.14
+
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project
+#
+# All text after a hash (#) is considered a comment and will be ignored
+# The format is:
+# TAG = value [value, ...]
+# For lists items can also be appended using:
+# TAG += value [value, ...]
+# Values that contain spaces should be placed between quotes (" ")
+
+#---------------------------------------------------------------------------
+# General configuration options
+#---------------------------------------------------------------------------
+
+# The PROJECT_NAME tag is a single word (or a sequence of words surrounded
+# by quotes) that should identify the project.
+
+PROJECT_NAME = libs3
+
+# The PROJECT_NUMBER tag can be used to enter a project or revision number.
+# This could be handy for archiving the generated documentation or
+# if some version control system is used.
+
+PROJECT_NUMBER = 1.4
+
+# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute)
+# base path where the generated documentation will be put.
+# If a relative path is entered, it will be relative to the location
+# where doxygen was started. If left blank the current directory will be used.
+
+OUTPUT_DIRECTORY = dox
+
+# The OUTPUT_LANGUAGE tag is used to specify the language in which all
+# documentation generated by doxygen is written. Doxygen will use this
+# information to generate all constant output in the proper language.
+# The default language is English, other supported languages are:
+# Brazilian, Chinese, Croatian, Czech, Danish, Dutch, Finnish, French,
+# German, Greek, Hungarian, Italian, Japanese, Korean, Norwegian, Polish,
+# Portuguese, Romanian, Russian, Slovak, Slovene, Spanish and Swedish.
+
+OUTPUT_LANGUAGE = English
+
+# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in
+# documentation are documented, even if no documentation was available.
+# Private class members and static file members will be hidden unless
+# the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES
+
+EXTRACT_ALL = YES
+
+# If the EXTRACT_PRIVATE tag is set to YES all private members of a class
+# will be included in the documentation.
+
+EXTRACT_PRIVATE = YES
+
+# If the EXTRACT_STATIC tag is set to YES all static members of a file
+# will be included in the documentation.
+
+EXTRACT_STATIC = YES
+
+# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs)
+# defined locally in source files will be included in the documentation.
+# If set to NO only classes defined in header files are included.
+
+EXTRACT_LOCAL_CLASSES = YES
+
+# If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all
+# undocumented members of documented classes, files or namespaces.
+# If set to NO (the default) these members will be included in the
+# various overviews, but no documentation section is generated.
+# This option has no effect if EXTRACT_ALL is enabled.
+
+HIDE_UNDOC_MEMBERS = NO
+
+# If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all
+# undocumented classes that are normally visible in the class hierarchy.
+# If set to NO (the default) these classes will be included in the various
+# overviews. This option has no effect if EXTRACT_ALL is enabled.
+
+HIDE_UNDOC_CLASSES = NO
+
+# If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will
+# include brief member descriptions after the members that are listed in
+# the file and class documentation (similar to JavaDoc).
+# Set to NO to disable this.
+
+BRIEF_MEMBER_DESC = YES
+
+# If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend
+# the brief description of a member or function before the detailed description.
+# Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
+# brief descriptions will be completely suppressed.
+
+REPEAT_BRIEF = YES
+
+# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
+# Doxygen will generate a detailed section even if there is only a brief
+# description.
+
+ALWAYS_DETAILED_SEC = NO
+
+# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all inherited
+# members of a class in the documentation of that class as if those members were
+# ordinary class members. Constructors, destructors and assignment operators of
+# the base classes will not be shown.
+
+INLINE_INHERITED_MEMB = NO
+
+# If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full
+# path before file names in the file list and in the header files. If set
+# to NO the shortest path that makes the file name unique will be used.
+
+FULL_PATH_NAMES = NO
+
+# If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag
+# can be used to strip a user defined part of the path. Stripping is
+# only done if one of the specified strings matches the left-hand part of
+# the path. It is allowed to use relative paths in the argument list.
+
+STRIP_FROM_PATH =
+
+# The INTERNAL_DOCS tag determines if documentation
+# that is typed after a \internal command is included. If the tag is set
+# to NO (the default) then the documentation will be excluded.
+# Set it to YES to include the internal documentation.
+
+INTERNAL_DOCS = NO
+
+# Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct
+# doxygen to hide any special comment blocks from generated source code
+# fragments. Normal C and C++ comments will always remain visible.
+
+STRIP_CODE_COMMENTS = YES
+
+# If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate
+# file names in lower case letters. If set to YES upper case letters are also
+# allowed. This is useful if you have classes or files whose names only differ
+# in case and if your file system supports case sensitive file names. Windows
+# users are advised to set this option to NO.
+
+CASE_SENSE_NAMES = YES
+
+# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter
+# (but less readable) file names. This can be useful if your file system
+# doesn't support long names, as on DOS, Mac, or CD-ROM.
+
+SHORT_NAMES = NO
+
+# If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen
+# will show members with their full class and namespace scopes in the
+# documentation. If set to YES the scope will be hidden.
+
+HIDE_SCOPE_NAMES = NO
+
+# If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen
+# will generate a verbatim copy of the header file for each class for
+# which an include is specified. Set to NO to disable this.
+
+VERBATIM_HEADERS = YES
+
+# If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen
+# will put list of the files that are included by a file in the documentation
+# of that file.
+
+SHOW_INCLUDE_FILES = YES
+
+# If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen
+# will interpret the first line (until the first dot) of a JavaDoc-style
+# comment as the brief description. If set to NO, the JavaDoc
+# comments will behave just like the Qt-style comments (thus requiring an
+# explicit @brief command for a brief description).
+
+JAVADOC_AUTOBRIEF = NO
+
+# If the INHERIT_DOCS tag is set to YES (the default) then an undocumented
+# member inherits the documentation from any documented member that it
+# reimplements.
+
+INHERIT_DOCS = YES
+
+# If the INLINE_INFO tag is set to YES (the default) then a tag [inline]
+# is inserted in the documentation for inline members.
+
+INLINE_INFO = YES
+
+# If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen
+# will sort the (detailed) documentation of file and class members
+# alphabetically by member name. If set to NO the members will appear in
+# declaration order.
+
+SORT_MEMBER_DOCS = NO
+
+# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
+# tag is set to YES, then doxygen will reuse the documentation of the first
+# member in the group (if any) for the other members of the group. By default
+# all members of a group must be documented explicitly.
+
+DISTRIBUTE_GROUP_DOC = NO
+
+# The TAB_SIZE tag can be used to set the number of spaces in a tab.
+# Doxygen uses this value to replace tabs by spaces in code fragments.
+
+TAB_SIZE = 8
+
+# The GENERATE_TODOLIST tag can be used to enable (YES) or
+# disable (NO) the todo list. This list is created by putting \todo
+# commands in the documentation.
+
+GENERATE_TODOLIST = YES
+
+# The GENERATE_TESTLIST tag can be used to enable (YES) or
+# disable (NO) the test list. This list is created by putting \test
+# commands in the documentation.
+
+GENERATE_TESTLIST = YES
+
+# The GENERATE_BUGLIST tag can be used to enable (YES) or
+# disable (NO) the bug list. This list is created by putting \bug
+# commands in the documentation.
+
+GENERATE_BUGLIST = YES
+
+# This tag can be used to specify a number of aliases that acts
+# as commands in the documentation. An alias has the form "name=value".
+# For example adding "sideeffect=\par Side Effects:\n" will allow you to
+# put the command \sideeffect (or @sideeffect) in the documentation, which
+# will result in a user defined paragraph with heading "Side Effects:".
+# You can put \n's in the value part of an alias to insert newlines.
+
+ALIASES =
+
+# The ENABLED_SECTIONS tag can be used to enable conditional
+# documentation sections, marked by \if sectionname ... \endif.
+
+ENABLED_SECTIONS =
+
+# The MAX_INITIALIZER_LINES tag determines the maximum number of lines
+# the initial value of a variable or define consist of for it to appear in
+# the documentation. If the initializer consists of more lines than specified
+# here it will be hidden. Use a value of 0 to hide initializers completely.
+# The appearance of the initializer of individual variables and defines in the
+# documentation can be controlled using \showinitializer or \hideinitializer
+# command in the documentation regardless of this setting.
+
+MAX_INITIALIZER_LINES = 30
+
+# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
+# only. Doxygen will then generate output that is more tailored for C.
+# For instance some of the names that are used will be different. The list
+# of all members will be omitted, etc.
+
+OPTIMIZE_OUTPUT_FOR_C = NO
+
+# Set the SHOW_USED_FILES tag to NO to disable the list of files generated
+# at the bottom of the documentation of classes and structs. If set to YES the
+# list will mention the files that were used to generate the documentation.
+
+SHOW_USED_FILES = YES
+
+#---------------------------------------------------------------------------
+# configuration options related to warning and progress messages
+#---------------------------------------------------------------------------
+
+# The QUIET tag can be used to turn on/off the messages that are generated
+# by doxygen. Possible values are YES and NO. If left blank NO is used.
+
+QUIET = NO
+
+# The WARNINGS tag can be used to turn on/off the warning messages that are
+# generated by doxygen. Possible values are YES and NO. If left blank
+# NO is used.
+
+WARNINGS = YES
+
+# If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings
+# for undocumented members. If EXTRACT_ALL is set to YES then this flag will
+# automatically be disabled.
+
+WARN_IF_UNDOCUMENTED = YES
+
+# The WARN_FORMAT tag determines the format of the warning messages that
+# doxygen can produce. The string should contain the $file, $line, and $text
+# tags, which will be replaced by the file and line number from which the
+# warning originated and the warning text.
+
+WARN_FORMAT = "$file:$line: $text"
+
+# The WARN_LOGFILE tag can be used to specify a file to which warning
+# and error messages should be written. If left blank the output is written
+# to stderr.
+
+WARN_LOGFILE =
+
+#---------------------------------------------------------------------------
+# configuration options related to the input files
+#---------------------------------------------------------------------------
+
+# The INPUT tag can be used to specify the files and/or directories that contain
+# documented source files. You may enter file names like "myfile.cpp" or
+# directories like "/usr/src/myproject". Separate the files or directories
+# with spaces.
+
+INPUT = inc/libs3.h
+
+# If the value of the INPUT tag contains directories, you can use the
+# FILE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp
+# and *.h) to filter out the source-files in the directories. If left
+# blank the following patterns are tested:
+# *.c *.cc *.cxx *.cpp *.c++ *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh *.hxx *.hpp
+# *.h++ *.idl *.odl
+
+FILE_PATTERNS =
+
+# The RECURSIVE tag can be used to specify whether or not subdirectories
+# should be searched for input files as well. Possible values are YES and NO.
+# If left blank NO is used.
+
+RECURSIVE = YES
+
+# The EXCLUDE tag can be used to specify files and/or directories that should
+# be excluded from the INPUT source files. This way you can easily exclude a
+# subdirectory from a directory tree whose root is specified with the INPUT tag.
+
+EXCLUDE =
+
+# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or directories
+# that are symbolic links (a Unix filesystem feature) are excluded from the input.
+
+EXCLUDE_SYMLINKS = NO
+
+# If the value of the INPUT tag contains directories, you can use the
+# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
+# certain files from those directories.
+
+EXCLUDE_PATTERNS =
+
+# The EXAMPLE_PATH tag can be used to specify one or more files or
+# directories that contain example code fragments that are included (see
+# the \include command).
+
+EXAMPLE_PATH =
+
+# If the value of the EXAMPLE_PATH tag contains directories, you can use the
+# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp
+# and *.h) to filter out the source-files in the directories. If left
+# blank all files are included.
+
+EXAMPLE_PATTERNS =
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
+# searched for input files to be used with the \include or \dontinclude
+# commands irrespective of the value of the RECURSIVE tag.
+# Possible values are YES and NO. If left blank NO is used.
+
+EXAMPLE_RECURSIVE = NO
+
+# The IMAGE_PATH tag can be used to specify one or more files or
+# directories that contain image that are included in the documentation (see
+# the \image command).
+
+IMAGE_PATH =
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should
+# invoke to filter for each input file. Doxygen will invoke the filter program
+# by executing (via popen()) the command <filter> <input-file>, where <filter>
+# is the value of the INPUT_FILTER tag, and <input-file> is the name of an
+# input file. Doxygen will then use the output that the filter program writes
+# to standard output.
+
+INPUT_FILTER =
+
+# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
+# INPUT_FILTER) will be used to filter the input files when producing source
+# files to browse.
+
+FILTER_SOURCE_FILES = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to source browsing
+#---------------------------------------------------------------------------
+
+# If the SOURCE_BROWSER tag is set to YES then a list of source files will
+# be generated. Documented entities will be cross-referenced with these sources.
+
+SOURCE_BROWSER = NO
+
+# Setting the INLINE_SOURCES tag to YES will include the body
+# of functions and classes directly in the documentation.
+
+INLINE_SOURCES = NO
+
+# If the REFERENCED_BY_RELATION tag is set to YES (the default)
+# then for each documented function all documented
+# functions referencing it will be listed.
+
+REFERENCED_BY_RELATION = YES
+
+# If the REFERENCES_RELATION tag is set to YES (the default)
+# then for each documented function all documented entities
+# called/used by that function will be listed.
+
+REFERENCES_RELATION = YES
+
+#---------------------------------------------------------------------------
+# configuration options related to the alphabetical class index
+#---------------------------------------------------------------------------
+
+# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index
+# of all compounds will be generated. Enable this if the project
+# contains a lot of classes, structs, unions or interfaces.
+
+ALPHABETICAL_INDEX = NO
+
+# If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then
+# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns
+# in which this list will be split (can be a number in the range [1..20])
+
+COLS_IN_ALPHA_INDEX = 5
+
+# In case all classes in a project start with a common prefix, all
+# classes will be put under the same header in the alphabetical index.
+# The IGNORE_PREFIX tag can be used to specify one or more prefixes that
+# should be ignored while generating the index headers.
+
+IGNORE_PREFIX =
+
+#---------------------------------------------------------------------------
+# configuration options related to the HTML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_HTML tag is set to YES (the default) Doxygen will
+# generate HTML output.
+
+GENERATE_HTML = YES
+
+# The HTML_OUTPUT tag is used to specify where the HTML docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `html' will be used as the default path.
+
+HTML_OUTPUT = html
+
+# The HTML_FILE_EXTENSION tag can be used to specify the file extension for
+# each generated HTML page (for example: .htm,.php,.asp). If it is left blank
+# doxygen will generate files with .html extension.
+
+HTML_FILE_EXTENSION = .html
+
+# The HTML_HEADER tag can be used to specify a personal HTML header for
+# each generated HTML page. If it is left blank doxygen will generate a
+# standard header.
+
+HTML_HEADER =
+
+# The HTML_FOOTER tag can be used to specify a personal HTML footer for
+# each generated HTML page. If it is left blank doxygen will generate a
+# standard footer.
+
+HTML_FOOTER =
+
+# The HTML_STYLESHEET tag can be used to specify a user defined cascading
+# style sheet that is used by each HTML page. It can be used to
+# fine-tune the look of the HTML output. If the tag is left blank doxygen
+# will generate a default style sheet
+
+HTML_STYLESHEET =
+
+# If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes,
+# files or namespaces will be aligned in HTML using tables. If set to
+# NO a bullet list will be used.
+
+HTML_ALIGN_MEMBERS = YES
+
+# If the GENERATE_HTMLHELP tag is set to YES, additional index files
+# will be generated that can be used as input for tools like the
+# Microsoft HTML help workshop to generate a compressed HTML help file (.chm)
+# of the generated HTML documentation.
+
+GENERATE_HTMLHELP = NO
+
+# If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag
+# controls if a separate .chi index file is generated (YES) or that
+# it should be included in the master .chm file (NO).
+
+GENERATE_CHI = NO
+
+# If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag
+# controls whether a binary table of contents is generated (YES) or a
+# normal table of contents (NO) in the .chm file.
+
+BINARY_TOC = NO
+
+# The TOC_EXPAND flag can be set to YES to add extra items for group members
+# to the contents of the Html help documentation and to the tree view.
+
+TOC_EXPAND = NO
+
+# The DISABLE_INDEX tag can be used to turn on/off the condensed index at
+# top of each HTML page. The value NO (the default) enables the index and
+# the value YES disables it.
+
+DISABLE_INDEX = NO
+
+# This tag can be used to set the number of enum values (range [1..20])
+# that doxygen will group on one line in the generated HTML documentation.
+
+ENUM_VALUES_PER_LINE = 4
+
+# If the GENERATE_TREEVIEW tag is set to YES, a side panel will be
+# generated containing a tree-like index structure (just like the one that
+# is generated for HTML Help). For this to work a browser that supports
+# JavaScript and frames is required (for instance Mozilla, Netscape 4.0+,
+# or Internet explorer 4.0+). Note that for large projects the tree generation
+# can take a very long time. In such cases it is better to disable this feature.
+# Windows users are probably better off using the HTML help feature.
+
+GENERATE_TREEVIEW = YES
+
+# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be
+# used to set the initial width (in pixels) of the frame in which the tree
+# is shown.
+
+TREEVIEW_WIDTH = 250
+
+#---------------------------------------------------------------------------
+# configuration options related to the LaTeX output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_LATEX tag is set to YES (the default) Doxygen will
+# generate Latex output.
+
+GENERATE_LATEX = NO
+
+# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `latex' will be used as the default path.
+
+LATEX_OUTPUT = latex
+
+# If the COMPACT_LATEX tag is set to YES Doxygen generates more compact
+# LaTeX documents. This may be useful for small projects and may help to
+# save some trees in general.
+
+COMPACT_LATEX = NO
+
+# The PAPER_TYPE tag can be used to set the paper type that is used
+# by the printer. Possible values are: a4, a4wide, letter, legal and
+# executive. If left blank a4wide will be used.
+
+PAPER_TYPE = a4wide
+
+# The EXTRA_PACKAGES tag can be used to specify one or more names of LaTeX
+# packages that should be included in the LaTeX output.
+
+EXTRA_PACKAGES =
+
+# The LATEX_HEADER tag can be used to specify a personal LaTeX header for
+# the generated latex document. The header should contain everything until
+# the first chapter. If it is left blank doxygen will generate a
+# standard header. Notice: only use this tag if you know what you are doing!
+
+LATEX_HEADER =
+
+# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated
+# is prepared for conversion to pdf (using ps2pdf). The pdf file will
+# contain links (just like the HTML output) instead of page references
+# This makes the output suitable for online browsing using a pdf viewer.
+
+PDF_HYPERLINKS = NO
+
+# If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of
+# plain latex in the generated Makefile. Set this option to YES to get a
+# higher quality PDF documentation.
+
+USE_PDFLATEX = NO
+
+# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode.
+# command to the generated LaTeX files. This will instruct LaTeX to keep
+# running if errors occur, instead of asking the user for help.
+# This option is also used when generating formulas in HTML.
+
+LATEX_BATCHMODE = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the RTF output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output
+# The RTF output is optimised for Word 97 and may not look very pretty with
+# other RTF readers or editors.
+
+GENERATE_RTF = NO
+
+# The RTF_OUTPUT tag is used to specify where the RTF docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `rtf' will be used as the default path.
+
+RTF_OUTPUT = rtf
+
+# If the COMPACT_RTF tag is set to YES Doxygen generates more compact
+# RTF documents. This may be useful for small projects and may help to
+# save some trees in general.
+
+COMPACT_RTF = NO
+
+# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated
+# will contain hyperlink fields. The RTF file will
+# contain links (just like the HTML output) instead of page references.
+# This makes the output suitable for online browsing using WORD or other
+# programs which support those fields.
+# Note: wordpad (write) and others do not support links.
+
+RTF_HYPERLINKS = NO
+
+# Load stylesheet definitions from file. Syntax is similar to doxygen's
+# config file, i.e. a series of assignments. You only have to provide
+# replacements, missing definitions are set to their default value.
+
+RTF_STYLESHEET_FILE =
+
+# Set optional variables used in the generation of an rtf document.
+# Syntax is similar to doxygen's config file.
+
+RTF_EXTENSIONS_FILE =
+
+#---------------------------------------------------------------------------
+# configuration options related to the man page output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_MAN tag is set to YES (the default) Doxygen will
+# generate man pages
+
+GENERATE_MAN = NO
+
+# The MAN_OUTPUT tag is used to specify where the man pages will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `man' will be used as the default path.
+
+MAN_OUTPUT = man
+
+# The MAN_EXTENSION tag determines the extension that is added to
+# the generated man pages (default is the subroutine's section .3)
+
+MAN_EXTENSION = .3
+
+# If the MAN_LINKS tag is set to YES and Doxygen generates man output,
+# then it will generate one additional man file for each entity
+# documented in the real man page(s). These additional files
+# only source the real man page, but without them the man command
+# would be unable to find the correct page. The default is NO.
+
+MAN_LINKS = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the XML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_XML tag is set to YES Doxygen will
+# generate an XML file that captures the structure of
+# the code including all documentation. Note that this
+# feature is still experimental and incomplete at the
+# moment.
+
+GENERATE_XML = NO
+
+#---------------------------------------------------------------------------
+# configuration options for the AutoGen Definitions output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will
+# generate an AutoGen Definitions (see autogen.sf.net) file
+# that captures the structure of the code including all
+# documentation. Note that this feature is still experimental
+# and incomplete at the moment.
+
+GENERATE_AUTOGEN_DEF = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the preprocessor
+#---------------------------------------------------------------------------
+
+# If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will
+# evaluate all C-preprocessor directives found in the sources and include
+# files.
+
+ENABLE_PREPROCESSING = YES
+
+# If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro
+# names in the source code. If set to NO (the default) only conditional
+# compilation will be performed. Macro expansion can be done in a controlled
+# way by setting EXPAND_ONLY_PREDEF to YES.
+
+MACRO_EXPANSION = NO
+
+# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES
+# then the macro expansion is limited to the macros specified with the
+# PREDEFINED and EXPAND_AS_PREDEFINED tags.
+
+EXPAND_ONLY_PREDEF = NO
+
+# If the SEARCH_INCLUDES tag is set to YES (the default) the include files
+# in the INCLUDE_PATH (see below) will be searched if a #include is found.
+
+SEARCH_INCLUDES = YES
+
+# The INCLUDE_PATH tag can be used to specify one or more directories that
+# contain include files that are not input files but should be processed by
+# the preprocessor.
+
+INCLUDE_PATH =
+
+# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
+# patterns (like *.h and *.hpp) to filter out the header-files in the
+# directories. If left blank, the patterns specified with FILE_PATTERNS will
+# be used.
+
+INCLUDE_FILE_PATTERNS =
+
+# The PREDEFINED tag can be used to specify one or more macro names that
+# are defined before the preprocessor is started (similar to the -D option of
+# gcc). The argument of the tag is a list of macros of the form: name
+# or name=definition (no spaces). If the definition and the = are
+# omitted =1 is assumed.
+
+PREDEFINED = DOXYGEN
+
+# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then
+# this tag can be used to specify a list of macro names that should be expanded.
+# The macro definition that is found in the sources will be used.
+# Use the PREDEFINED tag if you want to use a different macro definition.
+
+EXPAND_AS_DEFINED =
+
+# If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then
+# doxygen's preprocessor will remove all function-like macros that are alone
+# on a line and do not end with a semicolon. Such function macros are typically
+# used for boiler-plate code, and will confuse the parser if not removed.
+
+SKIP_FUNCTION_MACROS = YES
+
+#---------------------------------------------------------------------------
+# Configuration::additions related to external references
+#---------------------------------------------------------------------------
+
+# The TAGFILES tag can be used to specify one or more tagfiles.
+
+TAGFILES =
+
+# When a file name is specified after GENERATE_TAGFILE, doxygen will create
+# a tag file that is based on the input files it reads.
+
+GENERATE_TAGFILE =
+
+# If the ALLEXTERNALS tag is set to YES all external classes will be listed
+# in the class index. If set to NO only the inherited external classes
+# will be listed.
+
+ALLEXTERNALS = NO
+
+# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed
+# in the modules index. If set to NO, only the current project's groups will
+# be listed.
+
+EXTERNAL_GROUPS = YES
+
+# The PERL_PATH should be the absolute path and name of the perl script
+# interpreter (i.e. the result of `which perl').
+
+PERL_PATH = /usr/bin/perl
+
+#---------------------------------------------------------------------------
+# Configuration options related to the dot tool
+#---------------------------------------------------------------------------
+
+# If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will
+# generate an inheritance diagram (in HTML, RTF and LaTeX) for classes with base or
+# super classes. Setting the tag to NO turns the diagrams off. Note that this
+# option is superseded by the HAVE_DOT option below. This is only a fallback. It is
+# recommended to install and use dot, since it yields more powerful graphs.
+
+CLASS_DIAGRAMS = YES
+
+# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
+# available from the path. This tool is part of Graphviz, a graph visualization
+# toolkit from AT&T and Lucent Bell Labs. The other options in this section
+# have no effect if this option is set to NO (the default)
+
+HAVE_DOT = NO
+
+# If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen
+# will generate a graph for each documented class showing the direct and
+# indirect inheritance relations. Setting this tag to YES will force
+# the CLASS_DIAGRAMS tag to NO.
+
+CLASS_GRAPH = YES
+
+# If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen
+# will generate a graph for each documented class showing the direct and
+# indirect implementation dependencies (inheritance, containment, and
+# class references variables) of the class with other documented classes.
+
+COLLABORATION_GRAPH = YES
+
+# If set to YES, the inheritance and collaboration graphs will show the
+# relations between templates and their instances.
+
+TEMPLATE_RELATIONS = YES
+
+# If set to YES, the inheritance and collaboration graphs will hide
+# inheritance and usage relations if the target is undocumented
+# or is not a class.
+
+HIDE_UNDOC_RELATIONS = YES
+
+# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT
+# tags are set to YES then doxygen will generate a graph for each documented
+# file showing the direct and indirect include dependencies of the file with
+# other documented files.
+
+INCLUDE_GRAPH = YES
+
+# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and
+# HAVE_DOT tags are set to YES then doxygen will generate a graph for each
+# documented header file showing the documented files that directly or
+# indirectly include this file.
+
+INCLUDED_BY_GRAPH = YES
+
+# If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen
+# will show a graphical hierarchy of all classes instead of a textual one.
+
+GRAPHICAL_HIERARCHY = YES
+
+# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
+# generated by dot. Possible values are gif, jpg, and png
+# If left blank gif will be used.
+
+DOT_IMAGE_FORMAT = gif
+
+# The tag DOT_PATH can be used to specify the path where the dot tool can be
+# found. If left blank, it is assumed the dot tool can be found on the path.
+
+DOT_PATH =
+
+# The DOTFILE_DIRS tag can be used to specify one or more directories that
+# contain dot files that are included in the documentation (see the
+# \dotfile command).
+
+DOTFILE_DIRS =
+
+# The MAX_DOT_GRAPH_WIDTH tag can be used to set the maximum allowed width
+# (in pixels) of the graphs generated by dot. If a graph becomes larger than
+# this value, doxygen will try to truncate the graph, so that it fits within
+# the specified constraint. Beware that most browsers cannot cope with very
+# large images.
+
+MAX_DOT_GRAPH_WIDTH = 1024
+
+# The MAX_DOT_GRAPH_HEIGHT tag can be used to set the maximum allowed height
+# (in pixels) of the graphs generated by dot. If a graph becomes larger than
+# this value, doxygen will try to truncate the graph, so that it fits within
+# the specified constraint. Beware that most browsers cannot cope with very
+# large images.
+
+MAX_DOT_GRAPH_HEIGHT = 1024
+
+# If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will
+# generate a legend page explaining the meaning of the various boxes and
+# arrows in the dot generated graphs.
+
+GENERATE_LEGEND = YES
+
+# If the DOT_CLEANUP tag is set to YES (the default) Doxygen will
+# remove the intermediate dot files that are used to generate
+# the various graphs.
+
+DOT_CLEANUP = YES
+
+#---------------------------------------------------------------------------
+# Configuration::additions related to the search engine
+#---------------------------------------------------------------------------
+
+# The SEARCHENGINE tag specifies whether or not a search engine should be
+# used. If set to NO the values of all tags below this one will be ignored.
+
+SEARCHENGINE = NO
--- /dev/null
+/** **************************************************************************
+ * error_parser.h
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#ifndef ERROR_PARSER_H
+#define ERROR_PARSER_H
+
+#include "libs3.h"
+#include "simplexml.h"
+#include "string_buffer.h"
+
+
+#define EXTRA_DETAILS_SIZE 8
+
+typedef struct ErrorParser
+{
+ // This is the S3ErrorDetails that this ErrorParser fills in from the
+ // data that it parses
+ S3ErrorDetails s3ErrorDetails;
+
+ // This is the error XML parser
+ SimpleXml errorXmlParser;
+
+ // Set to 1 after the first call to add
+ int errorXmlParserInitialized;
+
+ // Used to buffer the S3 Error Code as it is read in
+ string_buffer(code, 1024);
+
+ // Used to buffer the S3 Error Message as it is read in
+ string_buffer(message, 1024);
+
+ // Used to buffer the S3 Error Resource as it is read in
+ string_buffer(resource, 1024);
+
+ // Used to buffer the S3 Error Further Details as it is read in
+ string_buffer(furtherDetails, 1024);
+
+ // The extra details; we support up to EXTRA_DETAILS_SIZE of them
+ S3NameValue extraDetails[EXTRA_DETAILS_SIZE];
+
+ // This is the buffer from which the names and values used in extraDetails
+ // are allocated
+ string_multibuffer(extraDetailsNamesValues, EXTRA_DETAILS_SIZE * 1024);
+} ErrorParser;
+
+
+// Always call this
+void error_parser_initialize(ErrorParser *errorParser);
+
+S3Status error_parser_add(ErrorParser *errorParser, char *buffer,
+ int bufferSize);
+
+void error_parser_convert_status(ErrorParser *errorParser, S3Status *status);
+
+// Always call this
+void error_parser_deinitialize(ErrorParser *errorParser);
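+
+/**
+ * An illustrative call sequence (a sketch only; the variable names below are
+ * hypothetical and error checking is omitted):
+ *
+ *     ErrorParser parser;
+ *     error_parser_initialize(&parser);
+ *     // feed each chunk of the error response body to the parser
+ *     S3Status status = error_parser_add(&parser, chunk, chunkLength);
+ *     // once the response is complete, refine the status from the parsed
+ *     // error details
+ *     error_parser_convert_status(&parser, &finalStatus);
+ *     error_parser_deinitialize(&parser);
+ **/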
+
+
+#endif /* ERROR_PARSER_H */
--- /dev/null
+/** **************************************************************************
+ * libs3.h
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#ifndef LIBS3_H
+#define LIBS3_H
+
+#include <stdint.h>
+#include <sys/select.h>
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+
+/** **************************************************************************
+ * Overview
+ * --------
+ *
+ * This library provides an API for using Amazon's S3 service (see
+ * http://s3.amazonaws.com). Its design goals are:
+ *
+ * - To provide a simple and straightforward API for accessing all of S3's
+ * functionality
+ * - To not require the developer using libs3 to know anything about:
+ * - HTTP
+ * - XML
+ * - SSL
+ * In other words, this API is meant to stand on its own, without requiring
+ * any implicit knowledge of how S3 services are accessed using HTTP
+ * protocols.
+ * - To be usable from multithreaded code
+ * - To be usable by code which wants to process multiple S3 requests
+ * simultaneously from a single thread
+ * - To be usable in the simple, straightforward way using sequentialized
+ * blocking requests
+ *
+ * The general usage pattern of libs3 is:
+ *
+ * - Initialize libs3 once per program by calling S3_initialize() at program
+ * start up time
+ * - Make any number of requests to S3 for getting, putting, or listing
+ * S3 buckets or objects, or modifying the ACLs associated with buckets
+ * or objects, using one of three general approaches:
+ * 1. Simple blocking requests, one at a time
+ * 2. Multiple threads each making simple blocking requests
+ * 3. From a single thread, managing multiple S3 requests simultaneously
+ * using file descriptors and a select()/poll() loop
+ * - Shut down libs3 at program exit time by calling S3_deinitialize()
+ *
+ * All functions which send requests to S3 return their results via a set of
+ * callback functions which must be supplied to libs3 at the time that the
+ * request is initiated. libs3 will call these functions back in the thread
+ * calling the libs3 function if blocking requests are made (i.e., if the
+ * S3RequestContext for the function invocation is passed in as NULL).
+ * If an S3RequestContext is used to drive multiple S3 requests
+ * simultaneously, then the callbacks will be made from the thread which
+ * calls S3_runall_request_context() or S3_runonce_request_context(), or
+ * possibly from the thread which calls S3_destroy_request_context(), if
+ * S3 requests are in progress at the time that this function is called.
+ *
+ * NOTE: Response headers from Amazon S3 are limited to 4K (2K of meta headers
+ * is all that Amazon supports, and libs3 allows Amazon an additional 2K of
+ * headers).
+ *
+ * NOTE: Because HTTP and the S3 REST protocol are highly under-specified,
+ * libs3 must make some assumptions about the maximum length of certain HTTP
+ * elements (such as headers) that it will accept. While efforts have been
+ * made to enforce maximums which are beyond that expected to be needed by any
+ * user of S3, it is always possible that these maximums may be too low in
+ * some rare circumstances. Bug reports, should this unlikely situation
+ * occur, would be most appreciated.
+ *
+ * Threading Rules
+ * ---------------
+ *
+ * 1. All arguments passed to any function must not be modified directly until
+ * the function returns.
+ * 2. All S3RequestContext and S3Request arguments passed to all functions may
+ * not be passed to any other libs3 function by any other thread until the
+ * function returns.
+ * 3. All functions may be called simultaneously by multiple threads as long
+ * as (1) and (2) are observed, EXCEPT for S3_initialize(), which must be
+ * called from one thread at a time only.
+ * 4. All callbacks will be made in the thread of the caller of the function
+ * which invoked them, so the caller of all libs3 functions should not hold
+ * locks that it would try to re-acquire in a callback, as this may
+ * deadlock.
+ ************************************************************************** **/
+
+
+/** **************************************************************************
+ * Constants
+ ************************************************************************** **/
+
+/**
+ * This is the hostname that all S3 requests will go through; virtual-host
+ * style requests will prepend the bucket name to this host name, and
+ * path-style requests will use this hostname directly
+ **/
+#define S3_HOSTNAME "s3.amazonaws.com"
+
+
+/**
+ * S3_MAX_BUCKET_NAME_SIZE is the maximum size of a bucket name.
+ **/
+
+#define S3_MAX_BUCKET_NAME_SIZE 255
+
+/**
+ * S3_MAX_KEY_SIZE is the maximum size of keys that Amazon S3 supports.
+ **/
+#define S3_MAX_KEY_SIZE 1024
+
+
+/**
+ * S3_MAX_METADATA_SIZE is the maximum number of bytes allowed for
+ * x-amz-meta header names and values in any request passed to Amazon S3
+ **/
+#define S3_MAX_METADATA_SIZE 2048
+
+
+/**
+ * S3_METADATA_HEADER_NAME_PREFIX is the prefix of an S3 "meta header"
+ **/
+#define S3_METADATA_HEADER_NAME_PREFIX "x-amz-meta-"
+
+
+/**
+ * S3_MAX_METADATA_COUNT is the maximum number of x-amz-meta- headers that
+ * could be included in a request to S3. The smallest meta header is
+ * "x-amz-meta-n: v". Since S3 doesn't count the ": " against the total, the
+ * smallest amount of data to count for a header would be the length of
+ * "x-amz-meta-nv".
+ **/
+#define S3_MAX_METADATA_COUNT \
+ (S3_MAX_METADATA_SIZE / (sizeof(S3_METADATA_HEADER_NAME_PREFIX "nv") - 1))
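+
+/**
+ * For illustration: with S3_MAX_METADATA_SIZE of 2048 and the 13 bytes
+ * counted for the smallest possible meta header ("x-amz-meta-nv"), this
+ * evaluates to 2048 / 13 = 157.
+ **/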
+
+
+/**
+ * S3_MAX_ACL_GRANT_COUNT is the maximum number of ACL grants that may be
+ * set on a bucket or object at one time. It is also the maximum number of
+ * ACL grants that the XML ACL parsing routine will parse.
+ **/
+#define S3_MAX_ACL_GRANT_COUNT 100
+
+
+/**
+ * This is the maximum number of characters (including terminating \0) that
+ * libs3 supports in an ACL grantee email address.
+ **/
+#define S3_MAX_GRANTEE_EMAIL_ADDRESS_SIZE 128
+
+
+/**
+ * This is the maximum number of characters (including terminating \0) that
+ * libs3 supports in an ACL grantee user id.
+ **/
+#define S3_MAX_GRANTEE_USER_ID_SIZE 128
+
+
+/**
+ * This is the maximum number of characters (including terminating \0) that
+ * libs3 supports in an ACL grantee user display name.
+ **/
+#define S3_MAX_GRANTEE_DISPLAY_NAME_SIZE 128
+
+
+/**
+ * This is the maximum number of characters that will be stored in the
+ * return buffer for the utility function which computes an HTTP authenticated
+ * query string
+ **/
+#define S3_MAX_AUTHENTICATED_QUERY_STRING_SIZE \
+ (sizeof("https://" S3_HOSTNAME "/") + (S3_MAX_KEY_SIZE * 3) + \
+ sizeof("?AWSAccessKeyId=") + 32 + sizeof("&Expires=") + 32 + \
+ sizeof("&Signature=") + 28 + 1)
+
+
+/**
+ * This constant is used by the S3_initialize() function, to specify that
+ * the winsock library should be initialized by libs3; only relevant on
+ * Microsoft Windows platforms.
+ **/
+#define S3_INIT_WINSOCK 1
+
+
+/**
+ * This convenience constant is used by the S3_initialize() function to
+ * indicate that all libraries required by libs3 should be initialized.
+ **/
+#define S3_INIT_ALL (S3_INIT_WINSOCK)
+
+
+/** **************************************************************************
+ * Enumerations
+ ************************************************************************** **/
+
+/**
+ * S3Status is a status code as returned by a libs3 function. The meaning of
+ * each status code is defined in the comments for each function which returns
+ * that status.
+ **/
+typedef enum
+{
+ S3StatusOK ,
+
+ /**
+ * Errors that prevent the S3 request from being issued or response from
+ * being read
+ **/
+ S3StatusInternalError ,
+ S3StatusOutOfMemory ,
+ S3StatusInterrupted ,
+ S3StatusInvalidBucketNameTooLong ,
+ S3StatusInvalidBucketNameFirstCharacter ,
+ S3StatusInvalidBucketNameCharacter ,
+ S3StatusInvalidBucketNameCharacterSequence ,
+ S3StatusInvalidBucketNameTooShort ,
+ S3StatusInvalidBucketNameDotQuadNotation ,
+ S3StatusQueryParamsTooLong ,
+ S3StatusFailedToInitializeRequest ,
+ S3StatusMetaDataHeadersTooLong ,
+ S3StatusBadMetaData ,
+ S3StatusBadContentType ,
+ S3StatusContentTypeTooLong ,
+ S3StatusBadMD5 ,
+ S3StatusMD5TooLong ,
+ S3StatusBadCacheControl ,
+ S3StatusCacheControlTooLong ,
+ S3StatusBadContentDispositionFilename ,
+ S3StatusContentDispositionFilenameTooLong ,
+ S3StatusBadContentEncoding ,
+ S3StatusContentEncodingTooLong ,
+ S3StatusBadIfMatchETag ,
+ S3StatusIfMatchETagTooLong ,
+ S3StatusBadIfNotMatchETag ,
+ S3StatusIfNotMatchETagTooLong ,
+ S3StatusHeadersTooLong ,
+ S3StatusKeyTooLong ,
+ S3StatusUriTooLong ,
+ S3StatusXmlParseFailure ,
+ S3StatusEmailAddressTooLong ,
+ S3StatusUserIdTooLong ,
+ S3StatusUserDisplayNameTooLong ,
+ S3StatusGroupUriTooLong ,
+ S3StatusPermissionTooLong ,
+ S3StatusTargetBucketTooLong ,
+ S3StatusTargetPrefixTooLong ,
+ S3StatusTooManyGrants ,
+ S3StatusBadGrantee ,
+ S3StatusBadPermission ,
+ S3StatusXmlDocumentTooLarge ,
+ S3StatusNameLookupError ,
+ S3StatusFailedToConnect ,
+ S3StatusServerFailedVerification ,
+ S3StatusConnectionFailed ,
+ S3StatusAbortedByCallback ,
+
+ /**
+ * Errors from the S3 service
+ **/
+ S3StatusErrorAccessDenied ,
+ S3StatusErrorAccountProblem ,
+ S3StatusErrorAmbiguousGrantByEmailAddress ,
+ S3StatusErrorBadDigest ,
+ S3StatusErrorBucketAlreadyExists ,
+ S3StatusErrorBucketAlreadyOwnedByYou ,
+ S3StatusErrorBucketNotEmpty ,
+ S3StatusErrorCredentialsNotSupported ,
+ S3StatusErrorCrossLocationLoggingProhibited ,
+ S3StatusErrorEntityTooSmall ,
+ S3StatusErrorEntityTooLarge ,
+ S3StatusErrorExpiredToken ,
+ S3StatusErrorIncompleteBody ,
+ S3StatusErrorIncorrectNumberOfFilesInPostRequest ,
+ S3StatusErrorInlineDataTooLarge ,
+ S3StatusErrorInternalError ,
+ S3StatusErrorInvalidAccessKeyId ,
+ S3StatusErrorInvalidAddressingHeader ,
+ S3StatusErrorInvalidArgument ,
+ S3StatusErrorInvalidBucketName ,
+ S3StatusErrorInvalidDigest ,
+ S3StatusErrorInvalidLocationConstraint ,
+ S3StatusErrorInvalidPayer ,
+ S3StatusErrorInvalidPolicyDocument ,
+ S3StatusErrorInvalidRange ,
+ S3StatusErrorInvalidSecurity ,
+ S3StatusErrorInvalidSOAPRequest ,
+ S3StatusErrorInvalidStorageClass ,
+ S3StatusErrorInvalidTargetBucketForLogging ,
+ S3StatusErrorInvalidToken ,
+ S3StatusErrorInvalidURI ,
+ S3StatusErrorKeyTooLong ,
+ S3StatusErrorMalformedACLError ,
+ S3StatusErrorMalformedXML ,
+ S3StatusErrorMaxMessageLengthExceeded ,
+ S3StatusErrorMaxPostPreDataLengthExceededError ,
+ S3StatusErrorMetadataTooLarge ,
+ S3StatusErrorMethodNotAllowed ,
+ S3StatusErrorMissingAttachment ,
+ S3StatusErrorMissingContentLength ,
+ S3StatusErrorMissingSecurityElement ,
+ S3StatusErrorMissingSecurityHeader ,
+ S3StatusErrorNoLoggingStatusForKey ,
+ S3StatusErrorNoSuchBucket ,
+ S3StatusErrorNoSuchKey ,
+ S3StatusErrorNotImplemented ,
+ S3StatusErrorNotSignedUp ,
+ S3StatusErrorOperationAborted ,
+ S3StatusErrorPermanentRedirect ,
+ S3StatusErrorPreconditionFailed ,
+ S3StatusErrorRedirect ,
+ S3StatusErrorRequestIsNotMultiPartContent ,
+ S3StatusErrorRequestTimeout ,
+ S3StatusErrorRequestTimeTooSkewed ,
+ S3StatusErrorRequestTorrentOfBucketError ,
+ S3StatusErrorSignatureDoesNotMatch ,
+ S3StatusErrorSlowDown ,
+ S3StatusErrorTemporaryRedirect ,
+ S3StatusErrorTokenRefreshRequired ,
+ S3StatusErrorTooManyBuckets ,
+ S3StatusErrorUnexpectedContent ,
+ S3StatusErrorUnresolvableGrantByEmailAddress ,
+ S3StatusErrorUserKeyMustBeSpecified ,
+ S3StatusErrorUnknown ,
+
+ /**
+ * The following are HTTP errors returned by S3 without enough detail to
+ * distinguish any of the above S3StatusError conditions
+ **/
+ S3StatusHttpErrorMovedTemporarily ,
+ S3StatusHttpErrorBadRequest ,
+ S3StatusHttpErrorForbidden ,
+ S3StatusHttpErrorNotFound ,
+ S3StatusHttpErrorConflict ,
+ S3StatusHttpErrorUnknown
+} S3Status;
+
+
+/**
+ * S3Protocol represents a protocol that may be used for communicating a
+ * request to the Amazon S3 service.
+ *
+ * In general, HTTPS is greatly preferred (and should be the default of any
+ * application using libs3) because it protects any data being sent to or
+ * from S3 using strong encryption. However, HTTPS is much more CPU intensive
+ * than HTTP, and if the caller is absolutely certain that it is OK for the
+ * data to be viewable by anyone in transit, then HTTP can be used.
+ **/
+typedef enum
+{
+ S3ProtocolHTTPS = 0,
+ S3ProtocolHTTP = 1
+} S3Protocol;
+
+
+/**
+ * S3UriStyle defines the form that an Amazon S3 URI identifying a bucket or
+ * object can take. The forms are:
+ *
+ * Virtual Host: ${protocol}://${bucket}.s3.amazonaws.com/[${key}]
+ * Path: ${protocol}://s3.amazonaws.com/${bucket}/[${key}]
+ *
+ * It is generally better to use the Virtual Host URI form, because it ensures
+ * that the bucket name used is compatible with normal HTTP GETs and POSTs of
+ * data to/from the bucket. However, if DNS lookups for the bucket are too
+ * slow or unreliable for some reason, Path URI form may be used.
+ **/
+typedef enum
+{
+ S3UriStyleVirtualHost = 0,
+ S3UriStylePath = 1
+} S3UriStyle;
+
+
+/**
+ * S3GranteeType defines the type of Grantee used in an S3 ACL Grant.
+ * Amazon Customer By Email - identifies the Grantee using their Amazon S3
+ * account email address
+ * Canonical User - identifies the Grantee by S3 User ID and Display Name,
+ * which can only be obtained by making requests to S3, for example, by
+ * listing owned buckets
+ * All AWS Users - identifies all authenticated AWS users
+ * All Users - identifies all users
+ * Log Delivery - identifies the Amazon group responsible for writing
+ * server access logs into buckets
+ **/
+typedef enum
+{
+ S3GranteeTypeAmazonCustomerByEmail = 0,
+ S3GranteeTypeCanonicalUser = 1,
+ S3GranteeTypeAllAwsUsers = 2,
+ S3GranteeTypeAllUsers = 3,
+ S3GranteeTypeLogDelivery = 4
+} S3GranteeType;
+
+
+/**
+ * This is an individual permission granted to a grantee in an S3 ACL Grant.
+ * Read permission gives the Grantee the permission to list the bucket, or
+ * read the object or its metadata
+ * Write permission gives the Grantee the permission to create, overwrite, or
+ * delete any object in the bucket, and is not supported for objects
+ * ReadACP permission gives the Grantee the permission to read the ACP for
+ * the bucket or object; the owner of the bucket or object always has
+ * this permission implicitly
+ * WriteACP permission gives the Grantee the permission to overwrite the ACP
+ * for the bucket or object; the owner of the bucket or object always has
+ * this permission implicitly
+ * FullControl permission gives the Grantee all permissions specified by the
+ * Read, Write, ReadACP, and WriteACP permissions
+ **/
+typedef enum
+{
+ S3PermissionRead = 0,
+ S3PermissionWrite = 1,
+ S3PermissionReadACP = 2,
+ S3PermissionWriteACP = 3,
+ S3PermissionFullControl = 4
+} S3Permission;
+
+
+/**
+ * S3CannedAcl is an ACL that can be specified when an object is created or
+ * updated. Each canned ACL has a predefined value when expanded to a full
+ * set of S3 ACL Grants.
+ * Private canned ACL gives the owner FULL_CONTROL and no other permissions
+ * are issued
+ * Public Read canned ACL gives the owner FULL_CONTROL and all users Read
+ * permission
+ * Public Read Write canned ACL gives the owner FULL_CONTROL and all users
+ * Read and Write permission
+ * AuthenticatedRead canned ACL gives the owner FULL_CONTROL and authenticated
+ * S3 users Read permission
+ **/
+typedef enum
+{
+ S3CannedAclPrivate = 0, /* private */
+ S3CannedAclPublicRead = 1, /* public-read */
+ S3CannedAclPublicReadWrite = 2, /* public-read-write */
+ S3CannedAclAuthenticatedRead = 3 /* authenticated-read */
+} S3CannedAcl;
+
+
+/** **************************************************************************
+ * Data Types
+ ************************************************************************** **/
+
+/**
+ * An S3RequestContext manages multiple S3 requests simultaneously; see the
+ * S3_XXX_request_context functions below for details
+ **/
+typedef struct S3RequestContext S3RequestContext;
+
+
+/**
+ * S3NameValue represents a single Name - Value pair, used to represent either
+ * S3 metadata associated with a key, or S3 error details.
+ **/
+typedef struct S3NameValue
+{
+ /**
+ * The name part of the Name - Value pair
+ **/
+ const char *name;
+
+ /**
+ * The value part of the Name - Value pair
+ **/
+ const char *value;
+} S3NameValue;
+
+
+/**
+ * S3ResponseProperties is passed to the properties callback function which is
+ * called when the complete response properties have been received. Some of
+ * the fields of this structure are optional and may not be provided in the
+ * response, and some will always be provided in the response.
+ **/
+typedef struct S3ResponseProperties
+{
+ /**
+ * This optional field identifies the request ID and may be used when
+ * reporting problems to Amazon.
+ **/
+ const char *requestId;
+
+ /**
+ * This optional field provides a second request ID and may be used when
+ * reporting problems to Amazon.
+ **/
+ const char *requestId2;
+
+ /**
+ * This optional field is the content type of the data which is returned
+ * by the request. If not provided, the default can be assumed to be
+ * "binary/octet-stream".
+ **/
+ const char *contentType;
+
+ /**
+ * This optional field is the content length of the data which is returned
+ * in the response. A negative value means that this value was not
+ * provided in the response. A value of 0 means that there is no content
+ * provided. A positive value gives the number of bytes in the content of
+ * the response.
+ **/
+ uint64_t contentLength;
+
+ /**
+ * This optional field names the server which serviced the request.
+ **/
+ const char *server;
+
+ /**
+ * This optional field provides a string identifying the unique contents
+ * of the resource identified by the request, such that the contents can
+ * be assumed not to be changed if the same eTag is returned at a later
+ * time describing the same resource. This is an MD5 sum of the contents.
+ **/
+ const char *eTag;
+
+ /**
+ * This optional field provides the last modified time, relative to the
+ * Unix epoch, of the contents. If this value is < 0, then the last
+ * modified time was not provided in the response. If this value is >= 0,
+ * then the last modified date of the contents is available as a number
+ * of seconds since the UNIX epoch.
+ **/
+ int64_t lastModified;
+
+ /**
+ * This is the number of user-provided meta data associated with the
+ * resource.
+ **/
+ int metaDataCount;
+
+ /**
+ * These are the meta data associated with the resource. In each case,
+ * the name will not include any S3-specific header prefixes
+ * (i.e. x-amz-meta- will have been removed from the beginning), and
+ * leading and trailing whitespace will have been stripped from the value.
+ **/
+ const S3NameValue *metaData;
+} S3ResponseProperties;
+
+
+/**
+ * S3AclGrant identifies a single grant in the ACL for a bucket or object. An
+ * ACL is composed of any number of grants, which specify a grantee and the
+ * permissions given to that grantee. S3 does not normalize ACLs in any way,
+ * so a redundant ACL specification will lead to a redundant ACL stored in S3.
+ **/
+typedef struct S3AclGrant
+{
+ /**
+ * The granteeType gives the type of grantee specified by this grant.
+ **/
+ S3GranteeType granteeType;
+ /**
+ * The identifier of the grantee that is set is determined by the
+ * granteeType:
+ *
+ * S3GranteeTypeAmazonCustomerByEmail - amazonCustomerByEmail.emailAddress
+ * S3GranteeTypeCanonicalUser - canonicalUser.id, canonicalUser.displayName
+ * S3GranteeTypeAllAwsUsers - none
+ * S3GranteeTypeAllUsers - none
+ * S3GranteeTypeLogDelivery - none
+ **/
+ union
+ {
+ /**
+ * This structure is used iff the granteeType is
+ * S3GranteeTypeAmazonCustomerByEmail.
+ **/
+ struct
+ {
+ /**
+ * This is the email address of the Amazon Customer being granted
+ * permissions by this S3AclGrant.
+ **/
+ char emailAddress[S3_MAX_GRANTEE_EMAIL_ADDRESS_SIZE];
+ } amazonCustomerByEmail;
+ /**
+ * This structure is used iff the granteeType is
+ * S3GranteeTypeCanonicalUser.
+ **/
+ struct
+ {
+ /**
+ * This is the CanonicalUser ID of the grantee
+ **/
+ char id[S3_MAX_GRANTEE_USER_ID_SIZE];
+ /**
+ * This is the display name of the grantee
+ **/
+ char displayName[S3_MAX_GRANTEE_DISPLAY_NAME_SIZE];
+ } canonicalUser;
+ } grantee;
+ /**
+ * This is the S3Permission to be granted to the grantee
+ **/
+ S3Permission permission;
+} S3AclGrant;
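+
+
+/**
+ * Example (an illustrative sketch, not part of the libs3 API; the email
+ * address is a placeholder): constructing a single S3AclGrant that gives
+ * Read permission to an Amazon customer identified by email address.
+ *
+ *   S3AclGrant grant;
+ *   memset(&grant, 0, sizeof(grant));
+ *   grant.granteeType = S3GranteeTypeAmazonCustomerByEmail;
+ *   snprintf(grant.grantee.amazonCustomerByEmail.emailAddress,
+ *            S3_MAX_GRANTEE_EMAIL_ADDRESS_SIZE, "%s", "user@example.com");
+ *   grant.permission = S3PermissionRead;
+ **/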
+
+
+/**
+ * A context for working with objects within a bucket. A bucket context holds
+ * all information necessary for working with a bucket, and may be used
+ * repeatedly over many consecutive (or simultaneous) calls into libs3 bucket
+ * operation functions.
+ **/
+typedef struct S3BucketContext
+{
+ /**
+ * The name of the bucket to use in the bucket context
+ **/
+ const char *bucketName;
+
+ /**
+ * The protocol to use when accessing the bucket
+ **/
+ S3Protocol protocol;
+
+ /**
+ * The URI style to use for all URIs sent to Amazon S3 while working with
+ * this bucket context
+ **/
+ S3UriStyle uriStyle;
+
+ /**
+ * The Amazon Access Key ID to use for access to the bucket
+ **/
+ const char *accessKeyId;
+
+ /**
+ * The Amazon Secret Access Key to use for access to the bucket
+ **/
+ const char *secretAccessKey;
+} S3BucketContext;
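+
+
+/**
+ * Example (an illustrative sketch; the bucket name and the environment
+ * variable names are placeholders): filling in an S3BucketContext that can
+ * then be passed to the bucket and object functions declared below.
+ *
+ *   S3BucketContext bucketContext;
+ *   bucketContext.bucketName = "example-bucket";
+ *   bucketContext.protocol = S3ProtocolHTTPS;
+ *   bucketContext.uriStyle = S3UriStyleVirtualHost;
+ *   bucketContext.accessKeyId = getenv("S3_ACCESS_KEY_ID");
+ *   bucketContext.secretAccessKey = getenv("S3_SECRET_ACCESS_KEY");
+ **/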
+
+
+/**
+ * This is a single entry supplied to the list bucket callback by a call to
+ * S3_list_bucket. It identifies a single matching key from the list
+ * operation.
+ **/
+typedef struct S3ListBucketContent
+{
+ /**
+ * This is the next key in the list bucket results.
+ **/
+ const char *key;
+
+ /**
+ * This is the number of seconds since UNIX epoch of the last modified
+ * date of the object identified by the key.
+ **/
+ int64_t lastModified;
+
+ /**
+ * This is the eTag of the object, which is an MD5 signature of the
+ * object's contents.
+ **/
+ const char *eTag;
+
+ /**
+ * This is the size of the object in bytes.
+ **/
+ uint64_t size;
+
+ /**
+ * This is the ID of the owner of the key; it is present only if access
+ * permissions allow it to be viewed.
+ **/
+ const char *ownerId;
+
+ /**
+ * This is the display name of the owner of the key; it is present only if
+ * access permissions allow it to be viewed.
+ **/
+ const char *ownerDisplayName;
+} S3ListBucketContent;
+
+
+/**
+ * S3PutProperties is the set of properties that may optionally be set by the
+ * user when putting objects to S3. Each field of this structure is optional
+ * and may or may not be present.
+ **/
+typedef struct S3PutProperties
+{
+ /**
+ * If present, this is the Content-Type that should be associated with the
+ * object. If not provided, S3 defaults to "binary/octet-stream".
+ **/
+ const char *contentType;
+
+ /**
+ * If present, this provides the MD5 signature of the contents, and is
+ * used to validate the contents. This is highly recommended by Amazon
+ * but not required. Its format is as a base64-encoded MD5 sum.
+ **/
+ const char *md5;
+
+ /**
+ * If present, this gives a Cache-Control header string to be supplied to
+ * HTTP clients which download this
+ **/
+ const char *cacheControl;
+
+ /**
+ * If present, this gives the filename to save the downloaded file to,
+ * whenever the object is downloaded via a web browser. This is only
+ * relevant for objects which are intended to be shared with users via web
+ * browsers and which are additionally intended to be downloaded rather
+ * than viewed.
+ **/
+ const char *contentDispositionFilename;
+
+ /**
+ * If present, this identifies the content encoding of the object. This
+ * is only applicable to encoded (usually, compressed) content, and only
+ * relevant if the object is intended to be downloaded via a browser.
+ **/
+ const char *contentEncoding;
+
+ /**
+ * If >= 0, this gives an expiration date for the content. This
+ * information is typically only delivered to users who download the
+ * content via a web browser.
+ **/
+ int64_t expires;
+
+ /**
+ * This identifies the "canned ACL" that should be used for this object.
+ * The default (0) gives only the owner of the object access to it.
+ **/
+ S3CannedAcl cannedAcl;
+
+ /**
+ * This is the number of values in the metaData field.
+ **/
+ int metaDataCount;
+
+ /**
+ * These are the meta data to pass to S3. In each case, the name part of
+ * the Name - Value pair should not include any special S3 HTTP header
+ * prefix (i.e., should be of the form 'foo', NOT 'x-amz-meta-foo').
+ **/
+ const S3NameValue *metaData;
+} S3PutProperties;
+
+
+/**
+ * S3GetConditions is used for the get_object operation, and specifies
+ * conditions which the object must meet in order to be successfully returned.
+ **/
+typedef struct S3GetConditions
+{
+ /**
+ * The request will be processed if the Last-Modified date of the
+ * object is greater than or equal to this value, specified as a number of
+ * seconds since Unix epoch. If this value is less than zero, it will not
+ * be used in the conditional.
+ **/
+ int64_t ifModifiedSince;
+
+ /**
+ * The request will be processed if the Last-Modified date of the
+ * object is less than this value, specified as a number of seconds since
+ * Unix epoch. If this value is less than zero, it will not be used in
+ * the conditional.
+ **/
+ int64_t ifNotModifiedSince;
+
+ /**
+ * If non-NULL, this gives an eTag header value which the object must
+ * match in order to be returned. Note that although the eTag is simply an
+ * MD5, this must be presented in the S3 eTag form, which typically
+ * includes double-quotes.
+ **/
+ const char *ifMatchETag;
+
+ /**
+ * If non-NULL, this gives an eTag header value which the object must not
+ * match in order to be returned. Note that although the eTag is simply an
+ * MD5, this must be presented in the S3 eTag form, which typically
+ * includes double-quotes.
+ **/
+ const char *ifNotMatchETag;
+} S3GetConditions;
+
+
+/**
+ * S3ErrorDetails provides detailed information describing an S3 error. This
+ * is only presented when the error is an S3-generated error (i.e. one of the
+ * S3StatusErrorXXX values).
+ **/
+typedef struct S3ErrorDetails
+{
+ /**
+ * This is the human-readable message that Amazon supplied describing the
+ * error
+ **/
+ const char *message;
+
+ /**
+ * This identifies the resource for which the error occurred
+ **/
+ const char *resource;
+
+ /**
+ * This gives human-readable further details describing the specifics of
+ * this error
+ **/
+ const char *furtherDetails;
+
+ /**
+ * This gives the number of S3NameValue pairs present in the extraDetails
+ * array
+ **/
+ int extraDetailsCount;
+
+ /**
+ * S3 can provide extra details in a freeform Name - Value pair format.
+ * Each error can have any number of these, and this array provides these
+ * additional extra details.
+ **/
+ S3NameValue *extraDetails;
+} S3ErrorDetails;
+
+
+/** **************************************************************************
+ * Callback Signatures
+ ************************************************************************** **/
+
+/**
+ * This callback is made whenever the response properties become available for
+ * any request.
+ *
+ * @param properties are the properties that are available from the response
+ * @param callbackData is the callback data as specified when the request
+ * was issued.
+ * @return S3StatusOK to continue processing the request, anything else to
+ * immediately abort the request with a status which will be
+ * passed to the S3ResponseCompleteCallback for this request.
+ * Typically, this will return either S3StatusOK or
+ * S3StatusAbortedByCallback.
+ **/
+typedef S3Status (S3ResponsePropertiesCallback)
+ (const S3ResponseProperties *properties, void *callbackData);
+
+
+/**
+ * This callback is made when the response has been completely received, or an
+ * error has occurred which has prematurely aborted the request, or one of the
+ * other user-supplied callbacks returned a value intended to abort the
+ * request. This callback is always made for every request, as the very last
+ * callback made for that request.
+ *
+ * @param status gives the overall status of the response, indicating success
+ * or failure; use S3_status_is_retryable() as a simple way to detect
+ * whether or not the status indicates that the request failed but may
+ * be retried.
+ * @param errorDetails if non-NULL, gives details as returned by the S3
+ * service, describing the error
+ * @param callbackData is the callback data as specified when the request
+ * was issued.
+ **/
+typedef void (S3ResponseCompleteCallback)(S3Status status,
+ const S3ErrorDetails *errorDetails,
+ void *callbackData);
+
+
+/**
+ * This callback is made for each bucket resulting from a list service
+ * operation.
+ *
+ * @param ownerId is the ID of the owner of the bucket
+ * @param ownerDisplayName is the owner display name of the owner of the bucket
+ * @param bucketName is the name of the bucket
+ * @param creationDateSeconds if < 0 indicates that no creation date was
+ * supplied for the bucket; if >= 0 indicates the number of seconds
+ * since UNIX Epoch of the creation date of the bucket
+ * @param callbackData is the callback data as specified when the request
+ * was issued.
+ * @return S3StatusOK to continue processing the request, anything else to
+ * immediately abort the request with a status which will be
+ * passed to the S3ResponseCompleteCallback for this request.
+ * Typically, this will return either S3StatusOK or
+ * S3StatusAbortedByCallback.
+ **/
+typedef S3Status (S3ListServiceCallback)(const char *ownerId,
+ const char *ownerDisplayName,
+ const char *bucketName,
+ int64_t creationDateSeconds,
+ void *callbackData);
+
+
+/**
+ * This callback is made repeatedly as a list bucket operation progresses.
+ * The contents reported via this callback are only reported once per list
+ * bucket operation, but multiple calls to this callback may be necessary to
+ * report all items resulting from the list bucket operation.
+ *
+ * @param isTruncated is true if the list bucket request was truncated by the
+ * S3 service, in which case the remainder of the list may be obtained
+ * by querying again using the Marker parameter to start the query
+ * after this set of results
+ * @param nextMarker if present, gives the largest (alphabetically) key
+ * returned in the response, which, if isTruncated is true, may be used
+ * as the marker in a subsequent list bucket operation to continue
+ * listing
+ * @param contentsCount is the number of ListBucketContent structures in the
+ * contents parameter
+ * @param contents is an array of ListBucketContent structures, each one
+ * describing an object in the bucket
+ * @param commonPrefixesCount is the number of common prefixes strings in the
+ * commonPrefixes parameter
+ * @param commonPrefixes is an array of strings, each specifying one of the
+ * common prefixes as returned by S3
+ * @param callbackData is the callback data as specified when the request
+ * was issued.
+ * @return S3StatusOK to continue processing the request, anything else to
+ * immediately abort the request with a status which will be
+ * passed to the S3ResponseCompleteCallback for this request.
+ * Typically, this will return either S3StatusOK or
+ * S3StatusAbortedByCallback.
+ **/
+typedef S3Status (S3ListBucketCallback)(int isTruncated,
+ const char *nextMarker,
+ int contentsCount,
+ const S3ListBucketContent *contents,
+ int commonPrefixesCount,
+ const char **commonPrefixes,
+ void *callbackData);
+
+
+/**
+ * This callback is made during a put object operation, to obtain the next
+ * chunk of data to put to the S3 service as the contents of the object. This
+ * callback is made repeatedly, each time acquiring the next chunk of data to
+ * write to the service, until a negative or 0 value is returned.
+ *
+ * @param bufferSize gives the maximum number of bytes that may be written
+ * into the buffer parameter by this callback
+ * @param buffer gives the buffer to fill with at most bufferSize bytes of
+ * data as the next chunk of data to send to S3 as the contents of this
+ * object
+ * @param callbackData is the callback data as specified when the request
+ * was issued.
+ * @return < 0 to abort the request with the status S3StatusAbortedByCallback,
+ * which will be passed to the response complete callback for this request, or
+ * 0 to indicate the end of data, or > 0 to identify the number of
+ * bytes that were written into the buffer by this callback
+ **/
+typedef int (S3PutObjectDataCallback)(int bufferSize, char *buffer,
+ void *callbackData);
+
+
+/**
+ * This callback is made during a get object operation, to provide the next
+ * chunk of data available from the S3 service constituting the contents of
+ * the object being fetched. This callback is made repeatedly, each time
+ * providing the next chunk of data read, until the complete object contents
+ * have been passed through the callback in this way, or the callback
+ * returns an error status.
+ *
+ * @param bufferSize gives the number of bytes in buffer
+ * @param buffer is the data being passed into the callback
+ * @param callbackData is the callback data as specified when the request
+ * was issued.
+ * @return S3StatusOK to continue processing the request, anything else to
+ * immediately abort the request with a status which will be
+ * passed to the S3ResponseCompleteCallback for this request.
+ * Typically, this will return either S3StatusOK or
+ * S3StatusAbortedByCallback.
+ **/
+typedef S3Status (S3GetObjectDataCallback)(int bufferSize, const char *buffer,
+ void *callbackData);
+
+
+/** **************************************************************************
+ * Callback Structures
+ ************************************************************************** **/
+
+
+/**
+ * An S3ResponseHandler defines the callbacks which are made for any
+ * request.
+ **/
+typedef struct S3ResponseHandler
+{
+ /**
+ * The propertiesCallback is made when the response properties have
+ * successfully been returned from S3. This function may not be called
+ * if the response properties were not successfully returned from S3.
+ **/
+ S3ResponsePropertiesCallback *propertiesCallback;
+
+ /**
+ * The completeCallback is always called for every request made to S3,
+ * regardless of the outcome of the request. It provides the status of
+ * the request upon its completion, as well as extra error details in the
+ * event of an S3 error.
+ **/
+ S3ResponseCompleteCallback *completeCallback;
+} S3ResponseHandler;
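+
+
+/**
+ * Example (an illustrative sketch, not part of the libs3 API): a minimal
+ * pair of callbacks and an S3ResponseHandler built from them. The
+ * completeCallback here assumes that callbackData points to an S3Status
+ * (or to a structure whose first member is one) into which it records the
+ * final status of the request.
+ *
+ *   static S3Status propertiesCallback
+ *       (const S3ResponseProperties *properties, void *callbackData)
+ *   {
+ *       return S3StatusOK;
+ *   }
+ *
+ *   static void completeCallback(S3Status status,
+ *                                const S3ErrorDetails *errorDetails,
+ *                                void *callbackData)
+ *   {
+ *       // callbackData is assumed to point to an S3Status
+ *       *((S3Status *) callbackData) = status;
+ *   }
+ *
+ *   static S3ResponseHandler responseHandler =
+ *       { &propertiesCallback, &completeCallback };
+ **/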
+
+
+/**
+ * An S3ListServiceHandler defines the callbacks which are made for
+ * list_service requests.
+ **/
+typedef struct S3ListServiceHandler
+{
+ /**
+ * responseHandler provides the properties and complete callback
+ **/
+ S3ResponseHandler responseHandler;
+
+ /**
+ * The listServiceCallback is called as items are reported back from S3 as
+ * responses to the request
+ **/
+ S3ListServiceCallback *listServiceCallback;
+} S3ListServiceHandler;
+
+
+/**
+ * An S3ListBucketHandler defines the callbacks which are made for
+ * list_bucket requests.
+ **/
+typedef struct S3ListBucketHandler
+{
+ /**
+ * responseHandler provides the properties and complete callback
+ **/
+ S3ResponseHandler responseHandler;
+
+ /**
+ * The listBucketCallback is called as items are reported back from S3 as
+ * responses to the request. This may be called more than one time per
+ * list bucket request, each time providing more items from the list
+ * operation.
+ **/
+ S3ListBucketCallback *listBucketCallback;
+} S3ListBucketHandler;
+
+
+/**
+ * An S3PutObjectHandler defines the callbacks which are made for
+ * put_object requests.
+ **/
+typedef struct S3PutObjectHandler
+{
+ /**
+ * responseHandler provides the properties and complete callback
+ **/
+ S3ResponseHandler responseHandler;
+
+ /**
+ * The putObjectDataCallback is called to acquire data to send to S3 as
+ * the contents of the put_object request. It is made repeatedly until it
+ * returns a negative number (indicating that the request should be
+ * aborted), or 0 (indicating that all data has been supplied).
+ **/
+ S3PutObjectDataCallback *putObjectDataCallback;
+} S3PutObjectHandler;
+
+
+/**
+ * An S3GetObjectHandler defines the callbacks which are made for
+ * get_object requests.
+ **/
+typedef struct S3GetObjectHandler
+{
+ /**
+ * responseHandler provides the properties and complete callback
+ **/
+ S3ResponseHandler responseHandler;
+
+ /**
+ * The getObjectDataCallback is called as data is read from S3 as the
+ * contents of the object being read in the get_object request. It is
+ * called repeatedly until there is no more data provided in the request,
+ * or until the callback returns an error status indicating that the
+ * request should be aborted.
+ **/
+ S3GetObjectDataCallback *getObjectDataCallback;
+} S3GetObjectHandler;
+
+
+/** **************************************************************************
+ * General Library Functions
+ ************************************************************************** **/
+
+/**
+ * Initializes libs3 for use. This function must be called before any other
+ * libs3 function is called. It may be called multiple times, with the same
+ * effect as calling it once, as long as S3_deinitialize() is called an
+ * equal number of times when the program has finished. This function is NOT
+ * thread-safe and must only be called by one thread at a time.
+ *
+ * @param userAgentInfo is a string that will be included in the User-Agent
+ * header of every request made to the S3 service. You may provide
+ * NULL or the empty string if you don't care about this. The value
+ * will not be copied by this function and must remain unaltered by the
+ * caller until S3_deinitialize() is called.
+ * @param flags is a bitmask of some combination of the S3_INIT_XXX flags, or
+ * S3_INIT_ALL, indicating which of the libraries that libs3 depends
+ * upon should be initialized by S3_initialize(). Only if your program
+ * initializes one of these dependency libraries itself should anything
+ * other than S3_INIT_ALL be passed in for this bitmask.
+ *
+ * You should pass S3_INIT_WINSOCK if and only if your application does
+ * not initialize winsock elsewhere. On non-Microsoft Windows
+ * platforms it has no effect.
+ *
+ * As a convenience, the macro S3_INIT_ALL is provided, which will do
+ * all necessary initialization; however, be warned that things may
+ * break if your application re-initializes the dependent libraries
+ * later.
+ * @return One of:
+ * S3StatusOK on success
+ * S3StatusInternalError if dependent libraries could not be
+ * initialized
+ * S3StatusOutOfMemory on failure due to out of memory
+ **/
+S3Status S3_initialize(const char *userAgentInfo, int flags);
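+
+
+/**
+ * Example (an illustrative sketch; the user agent string is an arbitrary
+ * placeholder): initializing libs3, and deinitializing it when finished.
+ *
+ *   S3Status status = S3_initialize("example-app/1.0", S3_INIT_ALL);
+ *   if (status != S3StatusOK) {
+ *       fprintf(stderr, "S3_initialize failed: %s\n",
+ *               S3_get_status_name(status));
+ *   }
+ *   else {
+ *       // ... make libs3 requests ...
+ *       S3_deinitialize();
+ *   }
+ **/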
+
+
+/**
+ * Must be called once for each call to S3_initialize(). After
+ * this call is complete, no libs3 function may be called except
+ * S3_initialize().
+ **/
+void S3_deinitialize();
+
+
+/**
+ * Returns a string with the textual name of an S3Status code
+ *
+ * @param status is S3Status code for which the textual name will be returned
+ * @return a string with the textual name of an S3Status code
+ **/
+const char *S3_get_status_name(S3Status status);
+
+
+/**
+ * This function may be used to validate an S3 bucket name as being in the
+ * correct form for use with the S3 service. Amazon S3 limits the allowed
+ * characters in S3 bucket names, as well as imposing some additional rules on
+ * the length of bucket names and their structure. There are actually two
+ * limits; one for bucket names used only in path-style URIs, and a more
+ * strict limit used for bucket names used in virtual-host-style URIs. It is
+ * advisable to use only bucket names which meet the more strict requirements
+ * regardless of how the bucket is expected to be used.
+ *
+ * This method does NOT validate that the bucket is available for use in the
+ * S3 service, so the return value of this function cannot be used to decide
+ * whether or not a bucket with the given name already exists in Amazon S3 or
+ * is accessible by the caller. It merely validates that the bucket name is
+ * valid for use with S3.
+ *
+ * @param bucketName is the bucket name to validate
+ * @param uriStyle gives the URI style to validate the bucket name against.
+ * It is advisable to always use S3UriStyleVirtualHost.
+ * @return One of:
+ * S3StatusOK if the bucket name was validated successfully
+ * S3StatusInvalidBucketNameTooLong if the bucket name exceeded the
+ * length limitation for the URI style, which is 255 bytes for
+ * path style URIs and 63 bytes for virtual host type URIs
+ * S3StatusInvalidBucketNameTooShort if the bucket name is less than
+ * 3 characters
+ * S3StatusInvalidBucketNameFirstCharacter if the bucket name has an
+ * invalid first character, which is anything other than
+ * an alphanumeric character
+ * S3StatusInvalidBucketNameCharacterSequence if the bucket name
+ * includes an invalid character sequence, which for virtual host
+ * style buckets is ".-" or "-."
+ * S3StatusInvalidBucketNameCharacter if the bucket name includes an
+ * invalid character, which is anything other than alphanumeric,
+ * '-', '.', or for path style URIs only, '_'.
+ * S3StatusInvalidBucketNameDotQuadNotation if the bucket name is in
+ * dot-quad notation, i.e. the form of an IP address, which is
+ * not allowed by Amazon S3.
+ **/
+S3Status S3_validate_bucket_name(const char *bucketName, S3UriStyle uriStyle);
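+
+
+/**
+ * Example (an illustrative sketch; the bucket name is a placeholder):
+ * validating a bucket name before attempting to use it.
+ *
+ *   S3Status status = S3_validate_bucket_name("example-bucket",
+ *                                             S3UriStyleVirtualHost);
+ *   if (status != S3StatusOK) {
+ *       fprintf(stderr, "Invalid bucket name: %s\n",
+ *               S3_get_status_name(status));
+ *   }
+ **/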
+
+
+/**
+ * Converts an XML representation of an ACL to a libs3 structured
+ * representation. This method is not strictly necessary for working with
+ * ACLs using libs3, but may be convenient for users of the library who read
+ * ACLs from elsewhere in XML format and need to use these ACLs with libs3.
+ *
+ * @param aclXml is the XML representation of the ACL. This must be a
+ * zero-terminated character string.
+ * @param ownerId will be filled in with the Owner ID specified in the XML.
+ * At most S3_MAX_GRANTEE_USER_ID_SIZE bytes will be stored at this
+ * location.
+ * @param ownerDisplayName will be filled in with the Owner Display Name
+ * specified in the XML. At most S3_MAX_GRANTEE_DISPLAY_NAME_SIZE bytes
+ * will be stored at this location.
+ * @param aclGrantCountReturn returns the number of S3AclGrant structures
+ * returned in the aclGrants array
+ * @param aclGrants must be passed in as an array of at least
+ * S3_MAX_ACL_GRANT_COUNT structures, and on return from this function,
+ * the first
+ * aclGrantCountReturn structures will be filled in with the ACLs
+ * represented by the input XML.
+ * @return One of:
+ * S3StatusOK on successful conversion of the ACL
+ * S3StatusInternalError on internal error representing a bug in the
+ * libs3 library
+ * S3StatusXmlParseFailure if the XML document was malformed
+ **/
+S3Status S3_convert_acl(char *aclXml, char *ownerId, char *ownerDisplayName,
+ int *aclGrantCountReturn, S3AclGrant *aclGrants);
+
+
+/**
+ * Returns nonzero if the status indicates that the request should be
+ * immediately retried, because the status indicates an error of a nature that
+ * is likely due to transient conditions on the local system or S3, such as
+ * network failures, or internal retryable errors reported by S3. Returns
+ * zero otherwise.
+ *
+ * @param status is the status to evaluate
+ * @return nonzero if the status indicates a retryable error, 0 otherwise
+ **/
+int S3_status_is_retryable(S3Status status);
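+
+
+/**
+ * Example (an illustrative sketch; bucketContext, responseHandler, and the
+ * key are placeholders): a simple retry loop around a synchronous request.
+ * The completeCallback is assumed to store the final status of each attempt
+ * into finalStatus, as in the S3ResponseHandler example above.
+ *
+ *   int retriesLeft = 3;
+ *   S3Status finalStatus;
+ *   do {
+ *       S3_head_object(&bucketContext, "example-key", NULL,
+ *                      &responseHandler, &finalStatus);
+ *   } while (S3_status_is_retryable(finalStatus) && (retriesLeft-- > 0));
+ **/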
+
+
+/** **************************************************************************
+ * Request Context Management Functions
+ ************************************************************************** **/
+
+/**
+ * An S3RequestContext allows multiple requests to be serviced by the same
+ * thread simultaneously. It is an optional parameter to all libs3 request
+ * functions, and if provided, the request is managed by the S3RequestContext;
+ * if not, the request is handled synchronously and is complete when the libs3
+ * request function has returned.
+ *
+ * @param requestContextReturn returns the newly-created S3RequestContext
+ * structure, which if successfully returned, must be destroyed via a
+ * call to S3_destroy_request_context when it is no longer needed. If
+ * an error status is returned from this function, then
+ * requestContextReturn will not have been filled in, and
+ * S3_destroy_request_context should not be called on it
+ * @return One of:
+ * S3StatusOK if the request context was successfully created
+ * S3StatusOutOfMemory if the request context could not be created due
+ * to an out of memory error
+ **/
+S3Status S3_create_request_context(S3RequestContext **requestContextReturn);
+
+
+/**
+ * Destroys an S3RequestContext which was created with
+ * S3_create_request_context. Any requests which are currently being
+ * processed by the S3RequestContext will immediately be aborted and their
+ * request completed callbacks made with the status S3StatusInterrupted.
+ *
+ * @param requestContext is the S3RequestContext to destroy
+ **/
+void S3_destroy_request_context(S3RequestContext *requestContext);
+
+
+/**
+ * Runs the S3RequestContext until all requests within it have completed,
+ * or until an error occurs.
+ *
+ * @param requestContext is the S3RequestContext to run until all requests
+ * within it have completed or until an error occurs
+ * @return One of:
+ * S3StatusOK if all requests were successfully run to completion
+ * S3StatusInternalError if an internal error prevented the
+ * S3RequestContext from running one or more requests
+ * S3StatusOutOfMemory if requests could not be run to completion
+ * due to an out of memory error
+ **/
+S3Status S3_runall_request_context(S3RequestContext *requestContext);
+
+
+/**
+ * Does some processing of requests within the S3RequestContext. One or more
+ * requests may have callbacks made on them and may complete. This function
+ * processes any requests which have immediately available I/O, and will not
+ * block waiting for I/O on any request. This function would normally be used
+ * with S3_get_request_context_fdsets.
+ *
+ * @param requestContext is the S3RequestContext to process
+ * @param requestsRemainingReturn returns the number of requests remaining
+ * and not yet completed within the S3RequestContext after this
+ * function returns.
+ * @return One of:
+ * S3StatusOK if request processing proceeded without error
+ * S3StatusInternalError if an internal error prevented the
+ * S3RequestContext from running one or more requests
+ * S3StatusOutOfMemory if requests could not be processed due to
+ * an out of memory error
+ **/
+S3Status S3_runonce_request_context(S3RequestContext *requestContext,
+ int *requestsRemainingReturn);
+
+
+/**
+ * This function, used in conjunction with S3_runonce_request_context, allows
+ * callers to manually manage a set of requests using an S3RequestContext.
+ * This function returns the set of file
+ * descriptors which the caller can watch (typically using select()), along
+ * with any other file descriptors of interest to the caller, and using
+ * whatever timeout (if any) the caller wishes, until one or more file
+ * descriptors in the returned sets become ready for I/O, at which point
+ * S3_runonce_request_context can be called to process requests with available
+ * I/O.
+ *
+ * @param requestContext is the S3RequestContext to get fd_sets from
+ * @param readFdSet is a pointer to an fd_set which will have all file
+ * descriptors to watch for read events for the requests in the
+ * S3RequestContext set into it upon return. Should be zero'd out
+ * (using FD_ZERO) before being passed into this function.
+ * @param writeFdSet is a pointer to an fd_set which will have all file
+ * descriptors to watch for write events for the requests in the
+ * S3RequestContext set into it upon return. Should be zero'd out
+ * (using FD_ZERO) before being passed into this function.
+ * @param exceptFdSet is a pointer to an fd_set which will have all file
+ * descriptors to watch for exception events for the requests in the
+ * S3RequestContext set into it upon return. Should be zero'd out
+ * (using FD_ZERO) before being passed into this function.
+ * @param maxFd returns the highest file descriptor set into any of the
+ * fd_sets, or -1 if no file descriptors were set
+ * @return One of:
+ * S3StatusOK if all fd_sets were successfully set
+ * S3StatusInternalError if an internal error prevented this function
+ * from completing successfully
+ **/
+S3Status S3_get_request_context_fdsets(S3RequestContext *requestContext,
+ fd_set *readFdSet, fd_set *writeFdSet,
+ fd_set *exceptFdSet, int *maxFd);
+
+
+/**
+ * This function returns the maximum number of milliseconds that the caller of
+ * S3_runonce_request_context should wait on the fdsets obtained via a call to
+ * S3_get_request_context_fdsets. In other words, this is essentially the
+ * select() timeout that needs to be used (shorter values are OK, but no
+ * longer than this) to ensure that internal timeout code of libs3 can work
+ * properly. This function should be called right before select() each time
+ * select() on the request context fdsets is to be performed by the libs3
+ * user.
+ *
+ * @param requestContext is the S3RequestContext to get the timeout from
+ * @return the maximum number of milliseconds to select() on fdsets. Callers
+ * could wait a shorter time if they wish, but not longer.
+ **/
+int64_t S3_get_request_context_timeout(S3RequestContext *requestContext);
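+
+
+/**
+ * Example (an illustrative sketch; error handling and the handling of a
+ * negative timeout are omitted): driving an S3RequestContext manually with
+ * select(). Requests are added to the context by passing it as the
+ * requestContext parameter of the request functions declared below.
+ *
+ *   S3RequestContext *requestContext;
+ *   if (S3_create_request_context(&requestContext) != S3StatusOK) {
+ *       return;
+ *   }
+ *   // ... issue one or more requests, passing requestContext ...
+ *   int requestsRemaining = 1;
+ *   while (requestsRemaining) {
+ *       fd_set readFds, writeFds, exceptFds;
+ *       int maxFd;
+ *       FD_ZERO(&readFds);
+ *       FD_ZERO(&writeFds);
+ *       FD_ZERO(&exceptFds);
+ *       S3_get_request_context_fdsets(requestContext, &readFds, &writeFds,
+ *                                     &exceptFds, &maxFd);
+ *       int64_t timeoutMs = S3_get_request_context_timeout(requestContext);
+ *       struct timeval tv = { timeoutMs / 1000, (timeoutMs % 1000) * 1000 };
+ *       if (maxFd >= 0) {
+ *           select(maxFd + 1, &readFds, &writeFds, &exceptFds, &tv);
+ *       }
+ *       S3_runonce_request_context(requestContext, &requestsRemaining);
+ *   }
+ *   S3_destroy_request_context(requestContext);
+ **/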
+
+
+/** **************************************************************************
+ * S3 Utility Functions
+ ************************************************************************** **/
+
+/**
+ * Generates an HTTP authenticated query string, which may then be used by
+ * a browser (or other web client) to issue the request. The request is
+ * implicitly a GET request; Amazon S3 is documented to only support this type
+ * of authenticated query string request.
+ *
+ * @param buffer is the output buffer for the authenticated query string.
+ * It must be at least S3_MAX_AUTHENTICATED_QUERY_STRING_SIZE bytes in
+ * length.
+ * @param bucketContext gives the bucket and associated parameters for the
+ * request to generate.
+ * @param key gives the key which the authenticated request will GET.
+ * @param expires gives the number of seconds since Unix epoch for the
+ * expiration date of the request; after this time, the request will
+ * no longer be valid. If this value is negative, the largest
+ * expiration date possible is used (currently, Jan 19, 2038).
+ * @param resource gives a sub-resource to be fetched for the request, or NULL
+ * for none. This should be of the form "?<resource>", e.g.
+ * "?torrent".
+ * @return One of:
+ * S3StatusUriTooLong if, due to an internal error, the generated URI
+ * is longer than S3_MAX_AUTHENTICATED_QUERY_STRING_SIZE bytes in
+ * length and thus will not fit into the supplied buffer
+ * S3StatusOK on success
+ **/
+S3Status S3_generate_authenticated_query_string
+ (char *buffer, const S3BucketContext *bucketContext,
+ const char *key, int64_t expires, const char *resource);
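+
+
+/**
+ * Example (an illustrative sketch; bucketContext is assumed to be filled in
+ * as shown earlier, and the key is a placeholder): generating a query string
+ * that grants temporary GET access to an object, expiring roughly one hour
+ * from the current time.
+ *
+ *   char queryString[S3_MAX_AUTHENTICATED_QUERY_STRING_SIZE];
+ *   S3Status status = S3_generate_authenticated_query_string
+ *       (queryString, &bucketContext, "example-key",
+ *        (int64_t) time(NULL) + 3600, NULL);
+ *   if (status == S3StatusOK) {
+ *       printf("%s\n", queryString);
+ *   }
+ **/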
+
+
+/** **************************************************************************
+ * Service Functions
+ ************************************************************************** **/
+
+/**
+ * Lists all S3 buckets belonging to the access key id.
+ *
+ * @param protocol gives the protocol to use for this request
+ * @param accessKeyId gives the Amazon Access Key ID for which to list owned
+ * buckets
+ * @param secretAccessKey gives the Amazon Secret Access Key for which to list
+ * owned buckets
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_list_service(S3Protocol protocol, const char *accessKeyId,
+ const char *secretAccessKey,
+ S3RequestContext *requestContext,
+ const S3ListServiceHandler *handler,
+ void *callbackData);
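+
+
+/**
+ * Example (an illustrative sketch; accessKeyId and secretAccessKey are
+ * placeholders): listing all owned buckets synchronously. The
+ * listServiceCallback simply prints each bucket name; propertiesCallback
+ * and completeCallback are as in the S3ResponseHandler example above.
+ *
+ *   static S3Status listServiceCallback(const char *ownerId,
+ *                                       const char *ownerDisplayName,
+ *                                       const char *bucketName,
+ *                                       int64_t creationDateSeconds,
+ *                                       void *callbackData)
+ *   {
+ *       printf("%s\n", bucketName);
+ *       return S3StatusOK;
+ *   }
+ *
+ *   S3ListServiceHandler listServiceHandler =
+ *       { { &propertiesCallback, &completeCallback }, &listServiceCallback };
+ *   S3Status finalStatus;
+ *   S3_list_service(S3ProtocolHTTPS, accessKeyId, secretAccessKey,
+ *                   NULL, &listServiceHandler, &finalStatus);
+ **/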
+
+
+/** **************************************************************************
+ * Bucket Functions
+ ************************************************************************** **/
+
+/**
+ * Tests the existence of an S3 bucket, additionally returning the bucket's
+ * location if it exists and is accessible.
+ *
+ * @param protocol gives the protocol to use for this request
+ * @param uriStyle gives the URI style to use for this request
+ * @param accessKeyId gives the Amazon Access Key ID for which to list owned
+ * buckets
+ * @param secretAccessKey gives the Amazon Secret Access Key for which to list
+ * owned buckets
+ * @param bucketName is the bucket name to test
+ * @param locationConstraintReturnSize gives the number of bytes in the
+ * locationConstraintReturn parameter
+ * @param locationConstraintReturn provides the location into which to write
+ * the name of the location constraint naming the geographic location
+ * of the S3 bucket. This must have at least as many characters in it
+ * as specified by locationConstraintReturnSize, and should start out
+ * NULL-terminated. On successful completion of this request, this
+ * will be set to the name of the geographic location of the S3 bucket, or
+ * will be left as a zero-length string if no location was available.
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_test_bucket(S3Protocol protocol, S3UriStyle uriStyle,
+ const char *accessKeyId, const char *secretAccessKey,
+ const char *bucketName, int locationConstraintReturnSize,
+ char *locationConstraintReturn,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData);
+
+
+/**
+ * Creates a new bucket.
+ *
+ * @param protocol gives the protocol to use for this request
+ * @param accessKeyId gives the Amazon Access Key ID for which to list owned
+ * buckets
+ * @param secretAccessKey gives the Amazon Secret Access Key for which to list
+ * owned buckets
+ * @param bucketName is the name of the bucket to be created
+ * @param cannedAcl gives the "REST canned ACL" to use for the created bucket
+ * @param locationConstraint if non-NULL, gives the geographic location for
+ * the bucket to create.
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_create_bucket(S3Protocol protocol, const char *accessKeyId,
+ const char *secretAccessKey, const char *bucketName,
+ S3CannedAcl cannedAcl, const char *locationConstraint,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData);
+
+
+/**
+ * Deletes a bucket. The bucket must be empty, or the status
+ * S3StatusErrorBucketNotEmpty will result.
+ *
+ * @param protocol gives the protocol to use for this request
+ * @param uriStyle gives the URI style to use for this request
+ * @param accessKeyId gives the Amazon Access Key ID for which to list owned
+ * buckets
+ * @param secretAccessKey gives the Amazon Secret Access Key for which to list
+ * owned buckets
+ * @param bucketName is the name of the bucket to be deleted
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_delete_bucket(S3Protocol protocol, S3UriStyle uriStyle,
+ const char *accessKeyId, const char *secretAccessKey,
+ const char *bucketName, S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData);
+
+
+/**
+ * Lists keys within a bucket.
+ *
+ * @param bucketContext gives the bucket and associated parameters for this
+ * request
+ * @param prefix if present, gives a prefix for matching keys
+ * @param marker if present, only keys occurring after this value will be
+ * listed
+ * @param delimiter if present, causes keys that contain the same string
+ * between the prefix and the first occurrence of the delimiter to be
+ * rolled up into a single result element
+ * @param maxkeys is the maximum number of keys to return
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_list_bucket(const S3BucketContext *bucketContext,
+ const char *prefix, const char *marker,
+ const char *delimiter, int maxkeys,
+ S3RequestContext *requestContext,
+ const S3ListBucketHandler *handler, void *callbackData);
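+
+
+/**
+ * Example (an illustrative sketch; bucketContext, propertiesCallback, and
+ * completeCallback are as in the earlier examples): a list bucket callback
+ * that prints each key, and a synchronous S3_list_bucket call using it.
+ * Continuing a truncated listing with nextMarker is omitted for brevity.
+ *
+ *   static S3Status listBucketCallback(int isTruncated,
+ *                                      const char *nextMarker,
+ *                                      int contentsCount,
+ *                                      const S3ListBucketContent *contents,
+ *                                      int commonPrefixesCount,
+ *                                      const char **commonPrefixes,
+ *                                      void *callbackData)
+ *   {
+ *       for (int i = 0; i < contentsCount; i++) {
+ *           printf("%s\n", contents[i].key);
+ *       }
+ *       return S3StatusOK;
+ *   }
+ *
+ *   S3ListBucketHandler listBucketHandler =
+ *       { { &propertiesCallback, &completeCallback }, &listBucketCallback };
+ *   S3Status finalStatus;
+ *   S3_list_bucket(&bucketContext, NULL, NULL, NULL, 1000,
+ *                  NULL, &listBucketHandler, &finalStatus);
+ **/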
+
+
+/** **************************************************************************
+ * Object Functions
+ ************************************************************************** **/
+
+/**
+ * Puts object data to S3. This overwrites any existing object at that key;
+ * note that S3 currently only supports full-object upload. The data to
+ * upload will be acquired by calling the handler's putObjectDataCallback.
+ *
+ * @param bucketContext gives the bucket and associated parameters for this
+ * request
+ * @param key is the key of the object to put to
+ * @param contentLength is required and gives the total number of bytes that
+ * will be put
+ * @param putProperties optionally provides additional properties to apply to
+ * the object that is being put to
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_put_object(const S3BucketContext *bucketContext, const char *key,
+ uint64_t contentLength,
+ const S3PutProperties *putProperties,
+ S3RequestContext *requestContext,
+ const S3PutObjectHandler *handler, void *callbackData);
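+
+
+/**
+ * Example (an illustrative sketch; PutData and the key are placeholders
+ * invented for this example): putting a small in-memory buffer as an object.
+ * The data callback copies out up to bufferSize bytes per call and returns 0
+ * once everything has been supplied. PutData's first member is an S3Status
+ * so that the completeCallback shown earlier can record the final status.
+ *
+ *   typedef struct {
+ *       S3Status status;
+ *       const char *data;
+ *       int size;
+ *       int offset;
+ *   } PutData;
+ *
+ *   static int putObjectDataCallback(int bufferSize, char *buffer,
+ *                                    void *callbackData)
+ *   {
+ *       PutData *putData = (PutData *) callbackData;
+ *       int toCopy = putData->size - putData->offset;
+ *       if (toCopy > bufferSize) {
+ *           toCopy = bufferSize;
+ *       }
+ *       memcpy(buffer, putData->data + putData->offset, toCopy);
+ *       putData->offset += toCopy;
+ *       return toCopy;
+ *   }
+ *
+ *   PutData putData = { S3StatusOK, "hello, world\n", 13, 0 };
+ *   S3PutObjectHandler putObjectHandler =
+ *       { { &propertiesCallback, &completeCallback },
+ *         &putObjectDataCallback };
+ *   S3_put_object(&bucketContext, "example-key", putData.size, NULL,
+ *                 NULL, &putObjectHandler, &putData);
+ **/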
+
+
+/**
+ * Copies an object from one location to another. The object may be copied
+ * back to itself, which is useful for replacing metadata without changing
+ * the object.
+ *
+ * @param bucketContext gives the source bucket and associated parameters for
+ * this request
+ * @param key is the source key
+ * @param destinationBucket gives the destination bucket into which to copy
+ * the object. If NULL, the source bucket will be used.
+ * @param destinationKey gives the destination key into which to copy the
+ * object. If NULL, the source key will be used.
+ * @param putProperties optionally provides properties to apply to the object
+ * that is being put to. If not supplied (i.e. NULL is passed in),
+ * then the copied object will retain the metadata of the source
+ * object.
+ * @param lastModifiedReturn returns the last modified date of the copied
+ * object
+ * @param eTagReturnSize specifies the number of bytes provided in the
+ * eTagReturn buffer
+ * @param eTagReturn is a buffer into which the resulting eTag of the copied
+ * object will be written
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_copy_object(const S3BucketContext *bucketContext,
+ const char *key, const char *destinationBucket,
+ const char *destinationKey,
+ const S3PutProperties *putProperties,
+ int64_t *lastModifiedReturn, int eTagReturnSize,
+ char *eTagReturn, S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData);
+
+
+/**
+ * Gets an object from S3. The contents of the object are returned in the
+ * handler's getObjectDataCallback.
+ *
+ * @param bucketContext gives the bucket and associated parameters for this
+ * request
+ * @param key is the key of the object to get
+ * @param getConditions if non-NULL, gives a set of conditions which must be
+ * met in order for the request to succeed
+ * @param startByte gives the start byte for the byte range of the contents
+ * to be returned
+ * @param byteCount gives the number of bytes to return; a value of 0
+ * indicates that the contents up to the end should be returned
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_get_object(const S3BucketContext *bucketContext, const char *key,
+ const S3GetConditions *getConditions,
+ uint64_t startByte, uint64_t byteCount,
+ S3RequestContext *requestContext,
+ const S3GetObjectHandler *handler, void *callbackData);
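+
+
+/**
+ * Example (an illustrative sketch; GetData and the key are placeholders
+ * invented for this example): downloading an object into an open FILE *.
+ * GetData's first member is an S3Status so that the completeCallback shown
+ * earlier can record the final status into it.
+ *
+ *   typedef struct {
+ *       S3Status status;
+ *       FILE *outFile;
+ *   } GetData;
+ *
+ *   static S3Status getObjectDataCallback(int bufferSize, const char *buffer,
+ *                                         void *callbackData)
+ *   {
+ *       GetData *getData = (GetData *) callbackData;
+ *       size_t written = fwrite(buffer, 1, bufferSize, getData->outFile);
+ *       return (((int) written) == bufferSize) ?
+ *           S3StatusOK : S3StatusAbortedByCallback;
+ *   }
+ *
+ *   GetData getData = { S3StatusOK, fopen("example-key", "wb") };
+ *   S3GetObjectHandler getObjectHandler =
+ *       { { &propertiesCallback, &completeCallback },
+ *         &getObjectDataCallback };
+ *   S3_get_object(&bucketContext, "example-key", NULL, 0, 0,
+ *                 NULL, &getObjectHandler, &getData);
+ *   fclose(getData.outFile);
+ **/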
+
+
+/**
+ * Gets the response properties for the object, but not the object contents.
+ *
+ * @param bucketContext gives the bucket and associated parameters for this
+ * request
+ * @param key is the key of the object to get the properties of
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_head_object(const S3BucketContext *bucketContext, const char *key,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData);
+
+/**
+ * Deletes an object from S3.
+ *
+ * @param bucketContext gives the bucket and associated parameters for this
+ * request
+ * @param key is the key of the object to delete
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_delete_object(const S3BucketContext *bucketContext, const char *key,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData);
+
+
+/** **************************************************************************
+ * Access Control List Functions
+ ************************************************************************** **/
+
+/**
+ * Gets the ACL for the given bucket or object.
+ *
+ * @param bucketContext gives the bucket and associated parameters for this
+ * request
+ * @param key is the key of the object to get the ACL of; or NULL to get the
+ * ACL of the bucket
+ * @param ownerId must be supplied as a buffer of at least
+ * S3_MAX_GRANTEE_USER_ID_SIZE bytes, and will be filled in with the
+ * owner ID of the object/bucket
+ * @param ownerDisplayName must be supplied as a buffer of at least
+ * S3_MAX_GRANTEE_DISPLAY_NAME_SIZE bytes, and will be filled in with
+ * the display name of the object/bucket
+ * @param aclGrantCountReturn returns the number of S3AclGrant structures
+ * returned in the aclGrants parameter
+ * @param aclGrants must be passed in as an array of at least
+ * S3_MAX_ACL_GRANT_COUNT S3AclGrant structures, which will be filled
+ * in with the grant information for the ACL
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_get_acl(const S3BucketContext *bucketContext, const char *key,
+ char *ownerId, char *ownerDisplayName,
+ int *aclGrantCountReturn, S3AclGrant *aclGrants,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData);
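+
+/**
+ * A minimal usage sketch for S3_get_acl (illustrative only; bucketContext
+ * and responseHandler are assumed to exist). The buffers are sized with the
+ * constants that the parameters above require:
+ *
+ *   char ownerId[S3_MAX_GRANTEE_USER_ID_SIZE];
+ *   char ownerDisplayName[S3_MAX_GRANTEE_DISPLAY_NAME_SIZE];
+ *   S3AclGrant aclGrants[S3_MAX_ACL_GRANT_COUNT];
+ *   int aclGrantCount = 0;
+ *
+ *   // A NULL (0) requestContext makes the request run synchronously
+ *   S3_get_acl(&bucketContext, "my-key", ownerId, ownerDisplayName,
+ *              &aclGrantCount, aclGrants, 0, &responseHandler, 0);
+ **/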
+
+
+/**
+ * Sets the ACL for the given bucket or object.
+ *
+ * @param bucketContext gives the bucket and associated parameters for this
+ * request
+ * @param key is the key of the object to set the ACL for; or NULL to set the
+ * ACL for the bucket
+ * @param ownerId is the owner ID of the object/bucket. Unfortunately, S3
+ * requires this to be valid and thus it must have been fetched by a
+ * previous S3 request, such as a list_buckets request.
+ * @param ownerDisplayName is the owner display name of the object/bucket.
+ * Unfortunately, S3 requires this to be valid and thus it must have
+ * been fetched by a previous S3 request, such as a list_buckets
+ * request.
+ * @param aclGrantCount is the number of ACL grants to set for the
+ * object/bucket
+ * @param aclGrants are the ACL grants to set for the object/bucket
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_set_acl(const S3BucketContext *bucketContext, const char *key,
+ const char *ownerId, const char *ownerDisplayName,
+ int aclGrantCount, const S3AclGrant *aclGrants,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData);
+
+
+/** **************************************************************************
+ * Server Access Log Functions
+ ************************************************************************** **/
+
+/**
+ * Gets the service access logging settings for a bucket. The service access
+ * logging settings specify whether or not the S3 service will write service
+ * access logs for requests made for the given bucket, and if so, several
+ * settings controlling how these logs will be written.
+ *
+ * @param bucketContext gives the bucket and associated parameters for this
+ * request; this is the bucket for which service access logging is
+ * being requested
+ * @param targetBucketReturn must be passed in as a buffer of at least
+ * (S3_MAX_BUCKET_NAME_SIZE + 1) bytes in length, and will be filled
+ * in with the target bucket name for access logging for the given
+ * bucket, which is the bucket into which access logs for the specified
+ * bucket will be written. This is returned as an empty string if
+ * service access logging is not enabled for the given bucket.
+ * @param targetPrefixReturn must be passed in as a buffer of at least
+ * (S3_MAX_KEY_SIZE + 1) bytes in length, and will be filled in
+ * with the key prefix for server access logs for the given bucket,
+ * or the empty string if no such prefix is specified.
+ * @param aclGrantCountReturn returns the number of ACL grants that are
+ * associated with the server access logging for the given bucket.
+ * @param aclGrants must be passed in as an array of at least
+ * S3_MAX_ACL_GRANT_COUNT S3AclGrant structures, and these will be
+ * filled in with the target grants associated with the server access
+ * logging for the given bucket, whose number is returned in the
+ * aclGrantCountReturn parameter. These grants will be applied to the
+ * ACL of any server access logging log files generated by the S3
+ * service for the given bucket.
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_get_server_access_logging(const S3BucketContext *bucketContext,
+ char *targetBucketReturn,
+ char *targetPrefixReturn,
+ int *aclGrantCountReturn,
+ S3AclGrant *aclGrants,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler,
+ void *callbackData);
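+
+/**
+ * A minimal usage sketch for S3_get_server_access_logging (illustrative
+ * only; bucketContext and responseHandler are assumed to exist). Note the
+ * buffer sizes that the parameters above require:
+ *
+ *   char targetBucket[S3_MAX_BUCKET_NAME_SIZE + 1];
+ *   char targetPrefix[S3_MAX_KEY_SIZE + 1];
+ *   S3AclGrant aclGrants[S3_MAX_ACL_GRANT_COUNT];
+ *   int aclGrantCount = 0;
+ *
+ *   S3_get_server_access_logging(&bucketContext, targetBucket, targetPrefix,
+ *                                &aclGrantCount, aclGrants, 0,
+ *                                &responseHandler, 0);
+ *
+ *   // An empty targetBucket string means logging is not enabled
+ **/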
+
+
+/**
+ * Sets the service access logging settings for a bucket. The service access
+ * logging settings specify whether or not the S3 service will write service
+ * access logs for requests made for the given bucket, and if so, several
+ * settings controlling how these logs will be written.
+ *
+ * @param bucketContext gives the bucket and associated parameters for this
+ * request; this is the bucket for which service access logging is
+ * being set
+ * @param targetBucket gives the target bucket name for access logging for the
+ * given bucket, which is the bucket into which access logs for the
+ * specified bucket will be written.
+ * @param targetPrefix is an optional parameter which specifies the key prefix
+ * for server access logs for the given bucket, or NULL if no such
+ * prefix is to be used.
+ * @param aclGrantCount specifies the number of ACL grants that are to be
+ * associated with the server access logging for the given bucket.
+ * @param aclGrants is as an array of S3AclGrant structures, whose number is
+ * given by the aclGrantCount parameter. These grants will be applied
+ * to the ACL of any server access logging log files generated by the
+ * S3 service for the given bucket.
+ * @param requestContext if non-NULL, gives the S3RequestContext to add this
+ * request to, and does not perform the request immediately. If NULL,
+ * performs the request immediately and synchronously.
+ * @param handler gives the callbacks to call as the request is processed and
+ * completed
+ * @param callbackData will be passed in as the callbackData parameter to
+ * all callbacks for this request
+ **/
+void S3_set_server_access_logging(const S3BucketContext *bucketContext,
+ const char *targetBucket,
+ const char *targetPrefix, int aclGrantCount,
+ const S3AclGrant *aclGrants,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler,
+ void *callbackData);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* LIBS3_H */
--- /dev/null
+/** **************************************************************************
+ * pthread.h
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#ifndef PTHREAD_H
+#define PTHREAD_H
+
+// This is a minimal implementation of pthreads on Windows, implementing just
+// the APIs needed by libs3
+
+// CRITICAL_SECTION is defined in the Windows headers
+#include <windows.h>
+
+unsigned long pthread_self();
+
+typedef struct
+{
+ CRITICAL_SECTION criticalSection;
+} pthread_mutex_t;
+
+int pthread_mutex_init(pthread_mutex_t *mutex, void *);
+int pthread_mutex_lock(pthread_mutex_t *mutex);
+int pthread_mutex_unlock(pthread_mutex_t *mutex);
+int pthread_mutex_destroy(pthread_mutex_t *mutex);
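+
+// A sketch of how these wrappers might map onto the Win32 API (illustrative
+// only; the actual implementation lives in the corresponding .c file):
+//
+//   int pthread_mutex_init(pthread_mutex_t *mutex, void *v)
+//   {
+//       (void) v;
+//       InitializeCriticalSection(&(mutex->criticalSection));
+//       return 0;
+//   }
+//
+// pthread_mutex_lock/unlock/destroy would similarly wrap
+// EnterCriticalSection, LeaveCriticalSection and DeleteCriticalSection, and
+// pthread_self can return GetCurrentThreadId().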
+
+#endif /* PTHREAD_H */
--- /dev/null
+/** **************************************************************************
+ * select.h
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+// This file is used only on a MingW build, and converts an include of
+// sys/select.h to its Windows equivalent
+
+#include <winsock2.h>
--- /dev/null
+/** **************************************************************************
+ * utsname.h
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+// This file is used only on a MingW build, and provides an implementation
+// of POSIX sys/utsname.h
+
+#ifndef UTSNAME_H
+#define UTSNAME_H
+
+struct utsname
+{
+ const char *sysname;
+ const char *machine;
+};
+
+int uname(struct utsname *);
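+
+// A sketch of what a MinGW implementation might look like (illustrative
+// only; the real implementation lives in the corresponding .c file):
+//
+//   int uname(struct utsname *u)
+//   {
+//       u->sysname = "Windows";
+//       u->machine = "i386";   // or detected at runtime
+//       return 0;
+//   }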
+
+#endif /* UTSNAME_H */
--- /dev/null
+/** **************************************************************************
+ * request.h
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#ifndef REQUEST_H
+#define REQUEST_H
+
+#include "libs3.h"
+#include "error_parser.h"
+#include "response_headers_handler.h"
+#include "util.h"
+
+// Describes a type of HTTP request (these are our supported HTTP "verbs")
+typedef enum
+{
+ HttpRequestTypeGET,
+ HttpRequestTypeHEAD,
+ HttpRequestTypePUT,
+ HttpRequestTypeCOPY,
+ HttpRequestTypeDELETE
+} HttpRequestType;
+
+
+// This completely describes a request. A RequestParams is not required to be
+// allocated from the heap and its lifetime is not assumed to extend beyond
+// the lifetime of the function to which it has been passed.
+typedef struct RequestParams
+{
+ // Request type, affects the HTTP verb used
+ HttpRequestType httpRequestType;
+
+ // Bucket context for request
+ S3BucketContext bucketContext;
+
+ // Key, if any
+ const char *key;
+
+ // Query params - ready to append to URI (i.e. ?p1=v1&p2=v2)
+ const char *queryParams;
+
+ // sub resource, like ?acl, ?location, ?torrent, ?logging
+ const char *subResource;
+
+ // If this is a copy operation, this gives the source bucket
+ const char *copySourceBucketName;
+
+ // If this is a copy operation, this gives the source key
+ const char *copySourceKey;
+
+ // Get conditions
+ const S3GetConditions *getConditions;
+
+ // Start byte
+ uint64_t startByte;
+
+ // Byte count
+ uint64_t byteCount;
+
+ // Put properties
+ const S3PutProperties *putProperties;
+
+ // Callback to be made when headers are available. Might not be called.
+ S3ResponsePropertiesCallback *propertiesCallback;
+
+ // Callback to be made to supply data to send to S3. Might not be called.
+ S3PutObjectDataCallback *toS3Callback;
+
+ // Number of bytes total that toS3Callback will supply
+ int64_t toS3CallbackTotalSize;
+
+ // Callback to be made that supplies data read from S3.
+ // Might not be called.
+ S3GetObjectDataCallback *fromS3Callback;
+
+ // Callback to be made when request is complete. This will *always* be
+ // called.
+ S3ResponseCompleteCallback *completeCallback;
+
+ // Data passed to the callbacks
+ void *callbackData;
+} RequestParams;
+
+
+// This is the stuff associated with a request that needs to be on the heap
+// (and thus live while a curl_multi is in use).
+typedef struct Request
+{
+ // These put the request on a doubly-linked list of requests in a
+ // request context, *if* the request is in a request context (else these
+ // will both be 0)
+ struct Request *prev, *next;
+
+ // The status of this Request, as will be reported to the user via the
+ // complete callback
+ S3Status status;
+
+ // The HTTP code returned by the S3 server, if it is known. Would rather
+ // not have to keep track of this but S3 doesn't always indicate its
+ // errors the same way
+ int httpResponseCode;
+
+ // The HTTP headers to use for the curl request
+ struct curl_slist *headers;
+
+ // The CURL structure driving the request
+ CURL *curl;
+
+ // libcurl requires that the uri be stored outside of the curl handle
+ char uri[MAX_URI_SIZE + 1];
+
+ // Callback to be made when headers are available. Might not be called.
+ S3ResponsePropertiesCallback *propertiesCallback;
+
+ // Callback to be made to supply data to send to S3. Might not be called.
+ S3PutObjectDataCallback *toS3Callback;
+
+ // Number of bytes total that toS3Callback has left to supply
+ int64_t toS3CallbackBytesRemaining;
+
+ // Callback to be made that supplies data read from S3.
+ // Might not be called.
+ S3GetObjectDataCallback *fromS3Callback;
+
+ // Callback to be made when request is complete. This will *always* be
+ // called.
+ S3ResponseCompleteCallback *completeCallback;
+
+ // Data passed to the callbacks
+ void *callbackData;
+
+ // Handler of response headers
+ ResponseHeadersHandler responseHeadersHandler;
+
+ // This is set to nonzero after the properties callback has been made
+ int propertiesCallbackMade;
+
+ // Parser of errors
+ ErrorParser errorParser;
+} Request;
+
+
+// Request functions
+// ----------------------------------------------------------------------------
+
+// Initialize the API
+S3Status request_api_initialize(const char *userAgentInfo, int flags);
+
+// Deinitialize the API
+void request_api_deinitialize();
+
+// Perform a request; if context is 0, performs the request immediately;
+// otherwise, sets it up to be performed by context.
+void request_perform(const RequestParams *params, S3RequestContext *context);
+
+// Called by the internal request code or internal request context code when a
+// curl has finished the request
+void request_finish(Request *request);
+
+// Convert a CURLE code to an S3Status
+S3Status request_curl_code_to_status(CURLcode code);
+
+
+#endif /* REQUEST_H */
--- /dev/null
+/** **************************************************************************
+ * request_context.h
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#ifndef REQUEST_CONTEXT_H
+#define REQUEST_CONTEXT_H
+
+#include "libs3.h"
+
+struct S3RequestContext
+{
+ CURLM *curlm;
+
+ struct Request *requests;
+};
+
+
+#endif /* REQUEST_CONTEXT_H */
--- /dev/null
+/** **************************************************************************
+ * response_headers_handler.h
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#ifndef RESPONSE_HEADERS_HANDLER_H
+#define RESPONSE_HEADERS_HANDLER_H
+
+#include "libs3.h"
+#include "string_buffer.h"
+#include "util.h"
+
+
+typedef struct ResponseHeadersHandler
+{
+ // The structure to pass to the headers callback. This is filled in by
+ // the ResponseHeadersHandler from the headers added to it.
+ S3ResponseProperties responseProperties;
+
+ // Set to 1 after the done call has been made
+ int done;
+
+ // The responseProperties strings get copied into here. We allow 128 bytes
+ // for each of the 5 string headers, plus a \0 terminator for each.
+ string_multibuffer(responsePropertyStrings, 5 * 129);
+
+ // responseProperties.metaHeaders strings get copied into here
+ string_multibuffer(responseMetaDataStrings,
+ COMPACTED_METADATA_BUFFER_SIZE);
+
+ // Response meta data
+ S3NameValue responseMetaData[S3_MAX_METADATA_COUNT];
+} ResponseHeadersHandler;
+
+
+void response_headers_handler_initialize(ResponseHeadersHandler *handler);
+
+void response_headers_handler_add(ResponseHeadersHandler *handler,
+ char *data, int dataLen);
+
+void response_headers_handler_done(ResponseHeadersHandler *handler,
+ CURL *curl);
+
+#endif /* RESPONSE_HEADERS_HANDLER_H */
--- /dev/null
+/** **************************************************************************
+ * simplexml.h
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#ifndef SIMPLEXML_H
+#define SIMPLEXML_H
+
+#include "libs3.h"
+
+
+// Simple XML callback.
+//
+// elementPath: is the full "path" of the element; i.e.
+// <foo><bar><baz>data</baz></bar></foo> would have 'data' in the element
+// foo/bar/baz.
+//
+// Return of anything other than S3StatusOK causes the calling
+// simplexml_add() function to immediately stop and return the status.
+//
+// data is passed in as 0 on end of element
+typedef S3Status (SimpleXmlCallback)(const char *elementPath, const char *data,
+ int dataLen, void *callbackData);
+
+typedef struct SimpleXml
+{
+ void *xmlParser;
+
+ SimpleXmlCallback *callback;
+
+ void *callbackData;
+
+ char elementPath[512];
+
+ int elementPathLen;
+
+ S3Status status;
+} SimpleXml;
+
+
+// Simple XML parsing
+// ----------------------------------------------------------------------------
+
+// Always call this, even if the simplexml doesn't end up being used
+void simplexml_initialize(SimpleXml *simpleXml, SimpleXmlCallback *callback,
+ void *callbackData);
+
+S3Status simplexml_add(SimpleXml *simpleXml, const char *data, int dataLen);
+
+
+// Always call this
+void simplexml_deinitialize(SimpleXml *simpleXml);
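+
+// A minimal usage sketch (illustrative only; myXmlCallback and myData are
+// hypothetical). This is the same pattern the libs3 sources themselves use:
+//
+//   SimpleXml xml;
+//   simplexml_initialize(&xml, &myXmlCallback, myData);
+//   // Feed the document to the parser in as many chunks as desired
+//   S3Status status = simplexml_add(&xml, buffer, bufferLen);
+//   simplexml_deinitialize(&xml);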
+
+
+#endif /* SIMPLEXML_H */
--- /dev/null
+/** **************************************************************************
+ * string_buffer.h
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#ifndef STRING_BUFFER_H
+#define STRING_BUFFER_H
+
+#include <stdio.h>
+
+
+// Declare a string_buffer with the given name of the given maximum length
+#define string_buffer(name, len) \
+ char name[len + 1]; \
+ int name##Len
+
+
+// Initialize a string_buffer
+#define string_buffer_initialize(sb) \
+ do { \
+ sb[0] = 0; \
+ sb##Len = 0; \
+ } while (0)
+
+
+// Append [len] bytes of [str] to [sb], setting [all_fit] to 1 if it fit, and
+// 0 if it did not
+#define string_buffer_append(sb, str, len, all_fit) \
+ do { \
+ sb##Len += snprintf(&(sb[sb##Len]), sizeof(sb) - sb##Len - 1, \
+ "%.*s", (int) (len), str); \
+ if (sb##Len > (int) (sizeof(sb) - 1)) { \
+ sb##Len = sizeof(sb) - 1; \
+ all_fit = 0; \
+ } \
+ else { \
+ all_fit = 1; \
+ } \
+ } while (0)
+
+
+// Declare a string multibuffer with the given name of the given maximum size
+#define string_multibuffer(name, size) \
+ char name[size]; \
+ int name##Size
+
+
+// Initialize a string_multibuffer
+#define string_multibuffer_initialize(smb) \
+ do { \
+ smb##Size = 0; \
+ } while (0)
+
+
+// Evaluates to the current string within the string_multibuffer
+#define string_multibuffer_current(smb) \
+ &(smb[smb##Size])
+
+
+// Adds a new string to the string_multibuffer
+#define string_multibuffer_add(smb, str, len, all_fit) \
+ do { \
+ smb##Size += (snprintf(&(smb[smb##Size]), \
+ sizeof(smb) - smb##Size, \
+ "%.*s", (int) (len), str) + 1); \
+ if (smb##Size > (int) sizeof(smb)) { \
+ smb##Size = sizeof(smb); \
+ all_fit = 0; \
+ } \
+ else { \
+ all_fit = 1; \
+ } \
+ } while (0)
+
+
+// Appends to the current string in the string_multibuffer. There must be a
+// current string, meaning that string_multibuffer_add must have been called
+// at least once for this string_multibuffer.
+#define string_multibuffer_append(smb, str, len, all_fit) \
+ do { \
+ smb##Size--; \
+ string_multibuffer_add(smb, str, len, all_fit); \
+ } while (0)
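+
+// A minimal usage sketch (illustrative only):
+//
+//   string_buffer(name, 255);      // declares char name[256] and int nameLen
+//   string_buffer_initialize(name);
+//   int fit;
+//   string_buffer_append(name, "abc", 3, fit);   // fit is 1 if it all fit
+//
+// A string_multibuffer works similarly, but packs several \0-terminated
+// strings into one buffer; string_multibuffer_current evaluates to the
+// position at which the next added string will begin.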
+
+
+#endif /* STRING_BUFFER_H */
--- /dev/null
+/** **************************************************************************
+ * util.h
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#ifndef UTIL_H
+#define UTIL_H
+
+#include <curl/curl.h>
+#include <curl/multi.h>
+#include <stdint.h>
+#include "libs3.h"
+
+
+// Derived from S3 documentation
+
+// This is the maximum number of bytes needed in a "compacted meta header"
+// buffer, which is a buffer storing all of the compacted meta headers.
+#define COMPACTED_METADATA_BUFFER_SIZE \
+ (S3_MAX_METADATA_COUNT * sizeof(S3_METADATA_HEADER_NAME_PREFIX "n: v"))
+
+// Maximum url encoded key size; since every single character could require
+// URL encoding, it's 3 times the size of a key (since each url encoded
+// character takes 3 characters: %NN)
+#define MAX_URLENCODED_KEY_SIZE (3 * S3_MAX_KEY_SIZE)
+
+// This is the maximum size of a URI that could be passed to S3:
+// https://s3.amazonaws.com/${BUCKET}/${KEY}?acl
+// 255 is the maximum bucket length
+#define MAX_URI_SIZE \
+ ((sizeof("https://" S3_HOSTNAME "/") - 1) + 255 + 1 + \
+ MAX_URLENCODED_KEY_SIZE + (sizeof("?torrent") - 1) + 1)
+
+// Maximum size of a canonicalized resource
+#define MAX_CANONICALIZED_RESOURCE_SIZE \
+ (1 + 255 + 1 + MAX_URLENCODED_KEY_SIZE + (sizeof("?torrent") - 1) + 1)
+
+
+// Utilities -----------------------------------------------------------------
+
+// URL-encodes a string from [src] into [dest]. [dest] must have at least
+// 3x the number of characters that [src] has. At most [maxSrcSize] bytes
+// from [src] are encoded; if more are present in [src], 0 is returned from
+// urlEncode, else nonzero is returned.
+int urlEncode(char *dest, const char *src, int maxSrcSize);
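+
+// For example (illustrative only), URL-encoding an object key might look
+// like this, using the sizing constants defined above:
+//
+//   char urlEncodedKey[MAX_URLENCODED_KEY_SIZE + 1];
+//   if (!urlEncode(urlEncodedKey, key, S3_MAX_KEY_SIZE)) {
+//       // key was longer than S3_MAX_KEY_SIZE
+//   }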
+
+// Returns < 0 on failure >= 0 on success
+int64_t parseIso8601Time(const char *str);
+
+uint64_t parseUnsignedInt(const char *str);
+
+// base64 encode bytes. The output buffer must have at least
+// ((4 * (inLen + 1)) / 3) bytes in it. Returns the number of bytes written
+// to [out].
+int base64Encode(const unsigned char *in, int inLen, char *out);
+
+// Compute HMAC-SHA-1 with key [key] and message [message], storing result
+// in [hmac]
+void HMAC_SHA1(unsigned char hmac[20], const unsigned char *key, int key_len,
+ const unsigned char *message, int message_len);
+
+// Compute a 64-bit hash value from a given set of bytes
+uint64_t hash(const unsigned char *k, int length);
+
+// Because Windows seems to be missing isblank(), use our own; it's a very
+// easy function to write in any case
+int is_blank(char c);
+
+#endif /* UTIL_H */
--- /dev/null
+Summary: C Library and Tools for Amazon S3 Access
+Name: libs3
+Version: 1.4
+Release: 1
+License: GPL
+Group: Networking/Utilities
+URL: http://sourceforge.net/projects/reallibs3
+Source0: libs3-1.4.tar.gz
+Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root
+# Want to include curl dependencies, but older Fedora Core uses curl-devel,
+# and newer Fedora Core uses libcurl-devel ... have to figure out how to
+# handle this problem, but for now, just don't check for any curl libraries
+# Buildrequires: curl-devel
+Buildrequires: libxml2-devel
+Buildrequires: openssl-devel
+Buildrequires: make
+# Requires: libcurl
+Requires: libxml2
+Requires: openssl
+
+%define debug_package %{nil}
+
+%description
+This package includes the libs3 shared object library, needed to run
+applications compiled against libs3, and additionally contains the s3
+utility for accessing Amazon S3.
+
+%package devel
+Summary: Headers and documentation for libs3
+Group: Development/Libraries
+Requires: %{name} = %{version}-%{release}
+
+%description devel
+This library provides an API for using Amazon's S3 service (see
+http://s3.amazonaws.com). Its design goals are:
+
+ - To provide a simple and straightforward API for accessing all of S3's
+ functionality
+ - To not require the developer using libs3 to need to know anything about:
+ - HTTP
+ - XML
+ - SSL
+ In other words, this API is meant to stand on its own, without requiring
+ any implicit knowledge of how S3 services are accessed using HTTP
+ protocols.
+ - To be usable from multithreaded code
+ - To be usable by code which wants to process multiple S3 requests
+ simultaneously from a single thread
+ - To be usable in the simple, straightforward way using sequentialized
+ blocking requests
+
+
+%prep
+%setup -q
+
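+# Note (illustrative, not required by this spec): binary and source RPMs can
+# typically be built directly from the tarball with something like:
+#   rpmbuild -ta libs3-1.4.tar.gz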
+%build
+BUILD=$RPM_BUILD_ROOT/build make exported
+
+%install
+BUILD=$RPM_BUILD_ROOT/build DESTDIR=$RPM_BUILD_ROOT/usr make install
+rm -rf $RPM_BUILD_ROOT/build
+
+%clean
+rm -rf $RPM_BUILD_ROOT
+
+%files
+%defattr(-,root,root,-)
+/usr/bin/s3
+/usr/lib/libs3.so*
+
+%files devel
+%defattr(-,root,root,-)
+/usr/include/libs3.h
+/usr/lib/libs3.a
+
+%changelog
+* Sat Aug 09 2008 <bryan@ischo.com> Bryan Ischo
+- Split into regular and devel packages.
+
+* Tue Aug 05 2008 <bryan@ischo.com> Bryan Ischo
+- Initial build.
--- /dev/null
+EXPORTS
+S3_convert_acl
+S3_copy_object
+S3_create_bucket
+S3_create_request_context
+S3_deinitialize
+S3_delete_bucket
+S3_delete_object
+S3_destroy_request_context
+S3_generate_authenticated_query_string
+S3_get_acl
+S3_get_object
+S3_get_request_context_fdsets
+S3_get_server_access_logging
+S3_get_status_name
+S3_head_object
+S3_initialize
+S3_list_bucket
+S3_list_service
+S3_put_object
+S3_runall_request_context
+S3_runonce_request_context
+S3_set_acl
+S3_set_server_access_logging
+S3_status_is_retryable
+S3_test_bucket
+S3_validate_bucket_name
--- /dev/null
+@echo off
+
+if exist "%1". (
+ rmdir /S /Q "%1"
+)
+
+if exist "%1". (
+ del /Q "%1"
+)
--- /dev/null
+/** **************************************************************************
+ * acl.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <stdlib.h>
+#include <string.h>
+#include "libs3.h"
+#include "request.h"
+
+// Use a rather arbitrary max size for the document of 64K
+#define ACL_XML_DOC_MAXSIZE (64 * 1024)
+
+
+// get acl -------------------------------------------------------------------
+
+typedef struct GetAclData
+{
+ SimpleXml simpleXml;
+
+ S3ResponsePropertiesCallback *responsePropertiesCallback;
+ S3ResponseCompleteCallback *responseCompleteCallback;
+ void *callbackData;
+
+ int *aclGrantCountReturn;
+ S3AclGrant *aclGrants;
+ char *ownerId;
+ char *ownerDisplayName;
+ string_buffer(aclXmlDocument, ACL_XML_DOC_MAXSIZE);
+} GetAclData;
+
+
+static S3Status getAclPropertiesCallback
+ (const S3ResponseProperties *responseProperties, void *callbackData)
+{
+ GetAclData *gaData = (GetAclData *) callbackData;
+
+ return (*(gaData->responsePropertiesCallback))
+ (responseProperties, gaData->callbackData);
+}
+
+
+static S3Status getAclDataCallback(int bufferSize, const char *buffer,
+ void *callbackData)
+{
+ GetAclData *gaData = (GetAclData *) callbackData;
+
+ int fit;
+
+ string_buffer_append(gaData->aclXmlDocument, buffer, bufferSize, fit);
+
+ return fit ? S3StatusOK : S3StatusXmlDocumentTooLarge;
+}
+
+
+static void getAclCompleteCallback(S3Status requestStatus,
+ const S3ErrorDetails *s3ErrorDetails,
+ void *callbackData)
+{
+ GetAclData *gaData = (GetAclData *) callbackData;
+
+ if (requestStatus == S3StatusOK) {
+ // Parse the document
+ requestStatus = S3_convert_acl
+ (gaData->aclXmlDocument, gaData->ownerId, gaData->ownerDisplayName,
+ gaData->aclGrantCountReturn, gaData->aclGrants);
+ }
+
+ (*(gaData->responseCompleteCallback))
+ (requestStatus, s3ErrorDetails, gaData->callbackData);
+
+ free(gaData);
+}
+
+
+void S3_get_acl(const S3BucketContext *bucketContext, const char *key,
+ char *ownerId, char *ownerDisplayName,
+ int *aclGrantCountReturn, S3AclGrant *aclGrants,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData)
+{
+ // Create the callback data
+ GetAclData *gaData = (GetAclData *) malloc(sizeof(GetAclData));
+ if (!gaData) {
+ (*(handler->completeCallback))(S3StatusOutOfMemory, 0, callbackData);
+ return;
+ }
+
+ gaData->responsePropertiesCallback = handler->propertiesCallback;
+ gaData->responseCompleteCallback = handler->completeCallback;
+ gaData->callbackData = callbackData;
+
+ gaData->aclGrantCountReturn = aclGrantCountReturn;
+ gaData->aclGrants = aclGrants;
+ gaData->ownerId = ownerId;
+ gaData->ownerDisplayName = ownerDisplayName;
+ string_buffer_initialize(gaData->aclXmlDocument);
+ *aclGrantCountReturn = 0;
+
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypeGET, // httpRequestType
+ { bucketContext->bucketName, // bucketName
+ bucketContext->protocol, // protocol
+ bucketContext->uriStyle, // uriStyle
+ bucketContext->accessKeyId, // accessKeyId
+ bucketContext->secretAccessKey }, // secretAccessKey
+ key, // key
+ 0, // queryParams
+ "acl", // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ 0, // putProperties
+ &getAclPropertiesCallback, // propertiesCallback
+ 0, // toS3Callback
+ 0, // toS3CallbackTotalSize
+ &getAclDataCallback, // fromS3Callback
+ &getAclCompleteCallback, // completeCallback
+ gaData // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
+
+
+// set acl -------------------------------------------------------------------
+
+static S3Status generateAclXmlDocument(const char *ownerId,
+ const char *ownerDisplayName,
+ int aclGrantCount,
+ const S3AclGrant *aclGrants,
+ int *xmlDocumentLenReturn,
+ char *xmlDocument,
+ int xmlDocumentBufferSize)
+{
+ *xmlDocumentLenReturn = 0;
+
+#define append(fmt, ...) \
+ do { \
+ *xmlDocumentLenReturn += snprintf \
+ (&(xmlDocument[*xmlDocumentLenReturn]), \
+ xmlDocumentBufferSize - *xmlDocumentLenReturn - 1, \
+ fmt, __VA_ARGS__); \
+ if (*xmlDocumentLenReturn >= xmlDocumentBufferSize) { \
+ return S3StatusXmlDocumentTooLarge; \
+ } \
+ } while (0)
+
+ append("<AccessControlPolicy><Owner><ID>%s</ID><DisplayName>%s"
+ "</DisplayName></Owner><AccessControlList>", ownerId,
+ ownerDisplayName);
+
+ int i;
+ for (i = 0; i < aclGrantCount; i++) {
+ append("%s", "<Grant><Grantee xmlns:xsi=\"http://www.w3.org/2001/"
+ "XMLSchema-instance\" xsi:type=\"");
+ const S3AclGrant *grant = &(aclGrants[i]);
+ switch (grant->granteeType) {
+ case S3GranteeTypeAmazonCustomerByEmail:
+ append("AmazonCustomerByEmail\"><EmailAddress>%s</EmailAddress>",
+ grant->grantee.amazonCustomerByEmail.emailAddress);
+ break;
+ case S3GranteeTypeCanonicalUser:
+ append("CanonicalUser\"><ID>%s</ID><DisplayName>%s</DisplayName>",
+ grant->grantee.canonicalUser.id,
+ grant->grantee.canonicalUser.displayName);
+ break;
+ default: { // case S3GranteeTypeAllAwsUsers/S3GranteeTypeAllUsers:
+ const char *grantee;
+ switch (grant->granteeType) {
+ case S3GranteeTypeAllAwsUsers:
+ grantee = "http://acs.amazonaws.com/groups/global/"
+ "AuthenticatedUsers";
+ break;
+ case S3GranteeTypeAllUsers:
+ grantee = "http://acs.amazonaws.com/groups/global/"
+ "AllUsers";
+ break;
+ default:
+ grantee = "http://acs.amazonaws.com/groups/s3/"
+ "LogDelivery";
+ break;
+ }
+ append("Group\"><URI>%s</URI>", grantee);
+ }
+ break;
+ }
+ append("</Grantee><Permission>%s</Permission></Grant>",
+ ((grant->permission == S3PermissionRead) ? "READ" :
+ (grant->permission == S3PermissionWrite) ? "WRITE" :
+ (grant->permission == S3PermissionReadACP) ? "READ_ACP" :
+ (grant->permission == S3PermissionWriteACP) ? "WRITE_ACP" :
+ "FULL_CONTROL"));
+ }
+
+ append("%s", "</AccessControlList></AccessControlPolicy>");
+
+ return S3StatusOK;
+}
+
+
+typedef struct SetAclData
+{
+ S3ResponsePropertiesCallback *responsePropertiesCallback;
+ S3ResponseCompleteCallback *responseCompleteCallback;
+ void *callbackData;
+
+ int aclXmlDocumentLen;
+ char aclXmlDocument[ACL_XML_DOC_MAXSIZE];
+ int aclXmlDocumentBytesWritten;
+
+} SetAclData;
+
+
+static S3Status setAclPropertiesCallback
+ (const S3ResponseProperties *responseProperties, void *callbackData)
+{
+ SetAclData *paData = (SetAclData *) callbackData;
+
+ return (*(paData->responsePropertiesCallback))
+ (responseProperties, paData->callbackData);
+}
+
+
+static int setAclDataCallback(int bufferSize, char *buffer, void *callbackData)
+{
+ SetAclData *paData = (SetAclData *) callbackData;
+
+ int remaining = (paData->aclXmlDocumentLen -
+ paData->aclXmlDocumentBytesWritten);
+
+ int toCopy = bufferSize > remaining ? remaining : bufferSize;
+
+ if (!toCopy) {
+ return 0;
+ }
+
+ memcpy(buffer, &(paData->aclXmlDocument
+ [paData->aclXmlDocumentBytesWritten]), toCopy);
+
+ paData->aclXmlDocumentBytesWritten += toCopy;
+
+ return toCopy;
+}
+
+
+static void setAclCompleteCallback(S3Status requestStatus,
+ const S3ErrorDetails *s3ErrorDetails,
+ void *callbackData)
+{
+ SetAclData *paData = (SetAclData *) callbackData;
+
+ (*(paData->responseCompleteCallback))
+ (requestStatus, s3ErrorDetails, paData->callbackData);
+
+ free(paData);
+}
+
+
+void S3_set_acl(const S3BucketContext *bucketContext, const char *key,
+ const char *ownerId, const char *ownerDisplayName,
+ int aclGrantCount, const S3AclGrant *aclGrants,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData)
+{
+ if (aclGrantCount > S3_MAX_ACL_GRANT_COUNT) {
+ (*(handler->completeCallback))
+ (S3StatusTooManyGrants, 0, callbackData);
+ return;
+ }
+
+ SetAclData *data = (SetAclData *) malloc(sizeof(SetAclData));
+ if (!data) {
+ (*(handler->completeCallback))(S3StatusOutOfMemory, 0, callbackData);
+ return;
+ }
+
+ // Convert aclGrants to XML document
+ S3Status status = generateAclXmlDocument
+ (ownerId, ownerDisplayName, aclGrantCount, aclGrants,
+ &(data->aclXmlDocumentLen), data->aclXmlDocument,
+ sizeof(data->aclXmlDocument));
+ if (status != S3StatusOK) {
+ free(data);
+ (*(handler->completeCallback))(status, 0, callbackData);
+ return;
+ }
+
+ data->responsePropertiesCallback = handler->propertiesCallback;
+ data->responseCompleteCallback = handler->completeCallback;
+ data->callbackData = callbackData;
+
+ data->aclXmlDocumentBytesWritten = 0;
+
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypePUT, // httpRequestType
+ { bucketContext->bucketName, // bucketName
+ bucketContext->protocol, // protocol
+ bucketContext->uriStyle, // uriStyle
+ bucketContext->accessKeyId, // accessKeyId
+ bucketContext->secretAccessKey }, // secretAccessKey
+ key, // key
+ 0, // queryParams
+ "acl", // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ 0, // putProperties
+ &setAclPropertiesCallback, // propertiesCallback
+ &setAclDataCallback, // toS3Callback
+ data->aclXmlDocumentLen, // toS3CallbackTotalSize
+ 0, // fromS3Callback
+ &setAclCompleteCallback, // completeCallback
+ data // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
--- /dev/null
+/** **************************************************************************
+ * bucket.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <string.h>
+#include <stdlib.h>
+#include "libs3.h"
+#include "request.h"
+#include "simplexml.h"
+
+// test bucket ---------------------------------------------------------------
+
+typedef struct TestBucketData
+{
+ SimpleXml simpleXml;
+
+ S3ResponsePropertiesCallback *responsePropertiesCallback;
+ S3ResponseCompleteCallback *responseCompleteCallback;
+ void *callbackData;
+
+ int locationConstraintReturnSize;
+ char *locationConstraintReturn;
+
+ string_buffer(locationConstraint, 256);
+} TestBucketData;
+
+
+static S3Status testBucketXmlCallback(const char *elementPath,
+ const char *data, int dataLen,
+ void *callbackData)
+{
+ TestBucketData *tbData = (TestBucketData *) callbackData;
+
+ int fit;
+
+ if (data && !strcmp(elementPath, "LocationConstraint")) {
+ string_buffer_append(tbData->locationConstraint, data, dataLen, fit);
+ }
+
+ return S3StatusOK;
+}
+
+
+static S3Status testBucketPropertiesCallback
+ (const S3ResponseProperties *responseProperties, void *callbackData)
+{
+ TestBucketData *tbData = (TestBucketData *) callbackData;
+
+ return (*(tbData->responsePropertiesCallback))
+ (responseProperties, tbData->callbackData);
+}
+
+
+static S3Status testBucketDataCallback(int bufferSize, const char *buffer,
+ void *callbackData)
+{
+ TestBucketData *tbData = (TestBucketData *) callbackData;
+
+ return simplexml_add(&(tbData->simpleXml), buffer, bufferSize);
+}
+
+
+static void testBucketCompleteCallback(S3Status requestStatus,
+ const S3ErrorDetails *s3ErrorDetails,
+ void *callbackData)
+{
+ TestBucketData *tbData = (TestBucketData *) callbackData;
+
+ // Copy the location constraint into the return buffer
+ snprintf(tbData->locationConstraintReturn,
+ tbData->locationConstraintReturnSize, "%s",
+ tbData->locationConstraint);
+
+ (*(tbData->responseCompleteCallback))
+ (requestStatus, s3ErrorDetails, tbData->callbackData);
+
+ simplexml_deinitialize(&(tbData->simpleXml));
+
+ free(tbData);
+}
+
+
+void S3_test_bucket(S3Protocol protocol, S3UriStyle uriStyle,
+ const char *accessKeyId, const char *secretAccessKey,
+ const char *bucketName, int locationConstraintReturnSize,
+ char *locationConstraintReturn,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData)
+{
+ // Create the callback data
+ TestBucketData *tbData =
+ (TestBucketData *) malloc(sizeof(TestBucketData));
+ if (!tbData) {
+ (*(handler->completeCallback))(S3StatusOutOfMemory, 0, callbackData);
+ return;
+ }
+
+ simplexml_initialize(&(tbData->simpleXml), &testBucketXmlCallback, tbData);
+
+ tbData->responsePropertiesCallback = handler->propertiesCallback;
+ tbData->responseCompleteCallback = handler->completeCallback;
+ tbData->callbackData = callbackData;
+
+ tbData->locationConstraintReturnSize = locationConstraintReturnSize;
+ tbData->locationConstraintReturn = locationConstraintReturn;
+ string_buffer_initialize(tbData->locationConstraint);
+
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypeGET, // httpRequestType
+ { bucketName, // bucketName
+ protocol, // protocol
+ uriStyle, // uriStyle
+ accessKeyId, // accessKeyId
+ secretAccessKey }, // secretAccessKey
+ 0, // key
+ 0, // queryParams
+ "location", // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ 0, // putProperties
+ &testBucketPropertiesCallback, // propertiesCallback
+ 0, // toS3Callback
+ 0, // toS3CallbackTotalSize
+ &testBucketDataCallback, // fromS3Callback
+ &testBucketCompleteCallback, // completeCallback
+ tbData // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
+
+
+// create bucket -------------------------------------------------------------
+
+typedef struct CreateBucketData
+{
+ S3ResponsePropertiesCallback *responsePropertiesCallback;
+ S3ResponseCompleteCallback *responseCompleteCallback;
+ void *callbackData;
+
+ char doc[1024];
+ int docLen, docBytesWritten;
+} CreateBucketData;
+
+
+static S3Status createBucketPropertiesCallback
+ (const S3ResponseProperties *responseProperties, void *callbackData)
+{
+ CreateBucketData *cbData = (CreateBucketData *) callbackData;
+
+ return (*(cbData->responsePropertiesCallback))
+ (responseProperties, cbData->callbackData);
+}
+
+
+static int createBucketDataCallback(int bufferSize, char *buffer,
+ void *callbackData)
+{
+ CreateBucketData *cbData = (CreateBucketData *) callbackData;
+
+ if (!cbData->docLen) {
+ return 0;
+ }
+
+ int remaining = (cbData->docLen - cbData->docBytesWritten);
+
+ int toCopy = bufferSize > remaining ? remaining : bufferSize;
+
+ if (!toCopy) {
+ return 0;
+ }
+
+ memcpy(buffer, &(cbData->doc[cbData->docBytesWritten]), toCopy);
+
+ cbData->docBytesWritten += toCopy;
+
+ return toCopy;
+}
+
+
+static void createBucketCompleteCallback(S3Status requestStatus,
+ const S3ErrorDetails *s3ErrorDetails,
+ void *callbackData)
+{
+ CreateBucketData *cbData = (CreateBucketData *) callbackData;
+
+ (*(cbData->responseCompleteCallback))
+ (requestStatus, s3ErrorDetails, cbData->callbackData);
+
+ free(cbData);
+}
+
+
+void S3_create_bucket(S3Protocol protocol, const char *accessKeyId,
+ const char *secretAccessKey, const char *bucketName,
+ S3CannedAcl cannedAcl, const char *locationConstraint,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData)
+{
+ // Create the callback data
+ CreateBucketData *cbData =
+ (CreateBucketData *) malloc(sizeof(CreateBucketData));
+ if (!cbData) {
+ (*(handler->completeCallback))(S3StatusOutOfMemory, 0, callbackData);
+ return;
+ }
+
+ cbData->responsePropertiesCallback = handler->propertiesCallback;
+ cbData->responseCompleteCallback = handler->completeCallback;
+ cbData->callbackData = callbackData;
+
+ if (locationConstraint) {
+ cbData->docLen =
+ snprintf(cbData->doc, sizeof(cbData->doc),
+ "<CreateBucketConfiguration><LocationConstraint>"
+ "%s</LocationConstraint></CreateBucketConfiguration>",
+ locationConstraint);
+ cbData->docBytesWritten = 0;
+ }
+ else {
+ cbData->docLen = 0;
+ }
+
+ // Set up S3PutProperties
+ S3PutProperties properties =
+ {
+ 0, // contentType
+ 0, // md5
+ 0, // cacheControl
+ 0, // contentDispositionFilename
+ 0, // contentEncoding
+ 0, // expires
+ cannedAcl, // cannedAcl
+ 0, // metaDataCount
+ 0 // metaData
+ };
+
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypePUT, // httpRequestType
+ { bucketName, // bucketName
+ protocol, // protocol
+ S3UriStylePath, // uriStyle
+ accessKeyId, // accessKeyId
+ secretAccessKey }, // secretAccessKey
+ 0, // key
+ 0, // queryParams
+ 0, // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ &properties, // putProperties
+ &createBucketPropertiesCallback, // propertiesCallback
+ &createBucketDataCallback, // toS3Callback
+ cbData->docLen, // toS3CallbackTotalSize
+ 0, // fromS3Callback
+ &createBucketCompleteCallback, // completeCallback
+ cbData // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
+
+
+// delete bucket -------------------------------------------------------------
+
+typedef struct DeleteBucketData
+{
+ S3ResponsePropertiesCallback *responsePropertiesCallback;
+ S3ResponseCompleteCallback *responseCompleteCallback;
+ void *callbackData;
+} DeleteBucketData;
+
+
+static S3Status deleteBucketPropertiesCallback
+ (const S3ResponseProperties *responseProperties, void *callbackData)
+{
+ DeleteBucketData *dbData = (DeleteBucketData *) callbackData;
+
+ return (*(dbData->responsePropertiesCallback))
+ (responseProperties, dbData->callbackData);
+}
+
+
+static void deleteBucketCompleteCallback(S3Status requestStatus,
+ const S3ErrorDetails *s3ErrorDetails,
+ void *callbackData)
+{
+ DeleteBucketData *dbData = (DeleteBucketData *) callbackData;
+
+ (*(dbData->responseCompleteCallback))
+ (requestStatus, s3ErrorDetails, dbData->callbackData);
+
+ free(dbData);
+}
+
+
+void S3_delete_bucket(S3Protocol protocol, S3UriStyle uriStyle,
+ const char *accessKeyId, const char *secretAccessKey,
+ const char *bucketName,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData)
+{
+ // Create the callback data
+ DeleteBucketData *dbData =
+ (DeleteBucketData *) malloc(sizeof(DeleteBucketData));
+ if (!dbData) {
+ (*(handler->completeCallback))(S3StatusOutOfMemory, 0, callbackData);
+ return;
+ }
+
+ dbData->responsePropertiesCallback = handler->propertiesCallback;
+ dbData->responseCompleteCallback = handler->completeCallback;
+ dbData->callbackData = callbackData;
+
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypeDELETE, // httpRequestType
+ { bucketName, // bucketName
+ protocol, // protocol
+ uriStyle, // uriStyle
+ accessKeyId, // accessKeyId
+ secretAccessKey }, // secretAccessKey
+ 0, // key
+ 0, // queryParams
+ 0, // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ 0, // putProperties
+ &deleteBucketPropertiesCallback, // propertiesCallback
+ 0, // toS3Callback
+ 0, // toS3CallbackTotalSize
+ 0, // fromS3Callback
+ &deleteBucketCompleteCallback, // completeCallback
+ dbData // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
+
+
+// list bucket ----------------------------------------------------------------
+
+typedef struct ListBucketContents
+{
+ string_buffer(key, 1024);
+ string_buffer(lastModified, 256);
+ string_buffer(eTag, 256);
+ string_buffer(size, 24);
+ string_buffer(ownerId, 256);
+ string_buffer(ownerDisplayName, 256);
+} ListBucketContents;
+
+
+static void initialize_list_bucket_contents(ListBucketContents *contents)
+{
+ string_buffer_initialize(contents->key);
+ string_buffer_initialize(contents->lastModified);
+ string_buffer_initialize(contents->eTag);
+ string_buffer_initialize(contents->size);
+ string_buffer_initialize(contents->ownerId);
+ string_buffer_initialize(contents->ownerDisplayName);
+}
+
+// We read up to 32 Contents at a time
+#define MAX_CONTENTS 32
+// We read up to 8 CommonPrefixes at a time
+#define MAX_COMMON_PREFIXES 8
+
+typedef struct ListBucketData
+{
+ SimpleXml simpleXml;
+
+ S3ResponsePropertiesCallback *responsePropertiesCallback;
+ S3ListBucketCallback *listBucketCallback;
+ S3ResponseCompleteCallback *responseCompleteCallback;
+ void *callbackData;
+
+ string_buffer(isTruncated, 64);
+ string_buffer(nextMarker, 1024);
+
+ int contentsCount;
+ ListBucketContents contents[MAX_CONTENTS];
+
+ int commonPrefixesCount;
+ char commonPrefixes[MAX_COMMON_PREFIXES][1024];
+ int commonPrefixLens[MAX_COMMON_PREFIXES];
+} ListBucketData;
+
+
+static void initialize_list_bucket_data(ListBucketData *lbData)
+{
+ lbData->contentsCount = 0;
+ initialize_list_bucket_contents(lbData->contents);
+ lbData->commonPrefixesCount = 0;
+ lbData->commonPrefixes[0][0] = 0;
+ lbData->commonPrefixLens[0] = 0;
+}
+
+
+static S3Status make_list_bucket_callback(ListBucketData *lbData)
+{
+ int i;
+
+ // Convert IsTruncated
+ int isTruncated = (!strcmp(lbData->isTruncated, "true") ||
+ !strcmp(lbData->isTruncated, "1")) ? 1 : 0;
+
+ // Convert the contents
+ S3ListBucketContent contents[lbData->contentsCount];
+
+ int contentsCount = lbData->contentsCount;
+ for (i = 0; i < contentsCount; i++) {
+ S3ListBucketContent *contentDest = &(contents[i]);
+ ListBucketContents *contentSrc = &(lbData->contents[i]);
+ contentDest->key = contentSrc->key;
+ contentDest->lastModified =
+ parseIso8601Time(contentSrc->lastModified);
+ contentDest->eTag = contentSrc->eTag;
+ contentDest->size = parseUnsignedInt(contentSrc->size);
+ contentDest->ownerId =
+ contentSrc->ownerId[0] ? contentSrc->ownerId : 0;
+ contentDest->ownerDisplayName = (contentSrc->ownerDisplayName[0] ?
+ contentSrc->ownerDisplayName : 0);
+ }
+
+ // Make the common prefixes array
+ int commonPrefixesCount = lbData->commonPrefixesCount;
+ char *commonPrefixes[commonPrefixesCount];
+ for (i = 0; i < commonPrefixesCount; i++) {
+ commonPrefixes[i] = lbData->commonPrefixes[i];
+ }
+
+ return (*(lbData->listBucketCallback))
+ (isTruncated, lbData->nextMarker,
+ contentsCount, contents, commonPrefixesCount,
+ (const char **) commonPrefixes, lbData->callbackData);
+}
+
+
+static S3Status listBucketXmlCallback(const char *elementPath,
+ const char *data, int dataLen,
+ void *callbackData)
+{
+ ListBucketData *lbData = (ListBucketData *) callbackData;
+
+ int fit;
+
+ if (data) {
+ if (!strcmp(elementPath, "ListBucketResult/IsTruncated")) {
+ string_buffer_append(lbData->isTruncated, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath, "ListBucketResult/NextMarker")) {
+ string_buffer_append(lbData->nextMarker, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath, "ListBucketResult/Contents/Key")) {
+ ListBucketContents *contents =
+ &(lbData->contents[lbData->contentsCount]);
+ string_buffer_append(contents->key, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath,
+ "ListBucketResult/Contents/LastModified")) {
+ ListBucketContents *contents =
+ &(lbData->contents[lbData->contentsCount]);
+ string_buffer_append(contents->lastModified, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath, "ListBucketResult/Contents/ETag")) {
+ ListBucketContents *contents =
+ &(lbData->contents[lbData->contentsCount]);
+ string_buffer_append(contents->eTag, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath, "ListBucketResult/Contents/Size")) {
+ ListBucketContents *contents =
+ &(lbData->contents[lbData->contentsCount]);
+ string_buffer_append(contents->size, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath, "ListBucketResult/Contents/Owner/ID")) {
+ ListBucketContents *contents =
+ &(lbData->contents[lbData->contentsCount]);
+ string_buffer_append(contents->ownerId, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath,
+ "ListBucketResult/Contents/Owner/DisplayName")) {
+ ListBucketContents *contents =
+ &(lbData->contents[lbData->contentsCount]);
+ string_buffer_append
+ (contents->ownerDisplayName, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath,
+ "ListBucketResult/CommonPrefixes/Prefix")) {
+ int which = lbData->commonPrefixesCount;
+ lbData->commonPrefixLens[which] +=
+ snprintf(lbData->commonPrefixes[which],
+ sizeof(lbData->commonPrefixes[which]) -
+ lbData->commonPrefixLens[which] - 1,
+ "%.*s", dataLen, data);
+ if (lbData->commonPrefixLens[which] >=
+ (int) sizeof(lbData->commonPrefixes[which])) {
+ return S3StatusXmlParseFailure;
+ }
+ }
+ }
+ else {
+ if (!strcmp(elementPath, "ListBucketResult/Contents")) {
+ // Finished a Contents
+ lbData->contentsCount++;
+ if (lbData->contentsCount == MAX_CONTENTS) {
+ // Make the callback
+ S3Status status = make_list_bucket_callback(lbData);
+ if (status != S3StatusOK) {
+ return status;
+ }
+ initialize_list_bucket_data(lbData);
+ }
+ else {
+ // Initialize the next one
+ initialize_list_bucket_contents
+ (&(lbData->contents[lbData->contentsCount]));
+ }
+ }
+ else if (!strcmp(elementPath,
+ "ListBucketResult/CommonPrefixes/Prefix")) {
+ // Finished a Prefix
+ lbData->commonPrefixesCount++;
+ if (lbData->commonPrefixesCount == MAX_COMMON_PREFIXES) {
+ // Make the callback
+ S3Status status = make_list_bucket_callback(lbData);
+ if (status != S3StatusOK) {
+ return status;
+ }
+ initialize_list_bucket_data(lbData);
+ }
+ else {
+ // Initialize the next one
+ lbData->commonPrefixes[lbData->commonPrefixesCount][0] = 0;
+ lbData->commonPrefixLens[lbData->commonPrefixesCount] = 0;
+ }
+ }
+ }
+
+ return S3StatusOK;
+}
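+
+
+// For reference, an abridged ListBucketResult response of the kind parsed
+// by the callback above might look like this (illustrative only; all
+// element values are hypothetical):
+//
+//   <ListBucketResult>
+//     <IsTruncated>false</IsTruncated>
+//     <Contents>
+//       <Key>photos/cat.jpg</Key>
+//       <LastModified>2008-01-01T12:00:00.000Z</LastModified>
+//       <ETag>&quot;0123456789abcdef0123456789abcdef&quot;</ETag>
+//       <Size>12345</Size>
+//       <Owner>
+//         <ID>canonical-user-id</ID>
+//         <DisplayName>owner-display-name</DisplayName>
+//       </Owner>
+//     </Contents>
+//     <CommonPrefixes>
+//       <Prefix>photos/</Prefix>
+//     </CommonPrefixes>
+//   </ListBucketResult>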
+
+
+static S3Status listBucketPropertiesCallback
+ (const S3ResponseProperties *responseProperties, void *callbackData)
+{
+ ListBucketData *lbData = (ListBucketData *) callbackData;
+
+ return (*(lbData->responsePropertiesCallback))
+ (responseProperties, lbData->callbackData);
+}
+
+
+static S3Status listBucketDataCallback(int bufferSize, const char *buffer,
+ void *callbackData)
+{
+ ListBucketData *lbData = (ListBucketData *) callbackData;
+
+ return simplexml_add(&(lbData->simpleXml), buffer, bufferSize);
+}
+
+
+static void listBucketCompleteCallback(S3Status requestStatus,
+ const S3ErrorDetails *s3ErrorDetails,
+ void *callbackData)
+{
+ ListBucketData *lbData = (ListBucketData *) callbackData;
+
+ // Make the callback if there is anything
+ if (lbData->contentsCount || lbData->commonPrefixesCount) {
+ make_list_bucket_callback(lbData);
+ }
+
+ (*(lbData->responseCompleteCallback))
+ (requestStatus, s3ErrorDetails, lbData->callbackData);
+
+ simplexml_deinitialize(&(lbData->simpleXml));
+
+ free(lbData);
+}
+
+
+void S3_list_bucket(const S3BucketContext *bucketContext, const char *prefix,
+ const char *marker, const char *delimiter, int maxkeys,
+ S3RequestContext *requestContext,
+ const S3ListBucketHandler *handler, void *callbackData)
+{
+ // Compose the query params
+ string_buffer(queryParams, 4096);
+ string_buffer_initialize(queryParams);
+
+#define safe_append(name, value) \
+ do { \
+ int fit; \
+ if (amp) { \
+ string_buffer_append(queryParams, "&", 1, fit); \
+ if (!fit) { \
+ (*(handler->responseHandler.completeCallback)) \
+ (S3StatusQueryParamsTooLong, 0, callbackData); \
+ return; \
+ } \
+ } \
+ string_buffer_append(queryParams, name "=", \
+ sizeof(name "=") - 1, fit); \
+ if (!fit) { \
+ (*(handler->responseHandler.completeCallback)) \
+ (S3StatusQueryParamsTooLong, 0, callbackData); \
+ return; \
+ } \
+ amp = 1; \
+ char encoded[3 * 1024]; \
+ if (!urlEncode(encoded, value, 1024)) { \
+ (*(handler->responseHandler.completeCallback)) \
+ (S3StatusQueryParamsTooLong, 0, callbackData); \
+ return; \
+ } \
+ string_buffer_append(queryParams, encoded, strlen(encoded), \
+ fit); \
+ if (!fit) { \
+ (*(handler->responseHandler.completeCallback)) \
+ (S3StatusQueryParamsTooLong, 0, callbackData); \
+ return; \
+ } \
+ } while (0)
+
+
+ int amp = 0;
+ if (prefix) {
+ safe_append("prefix", prefix);
+ }
+ if (marker) {
+ safe_append("marker", marker);
+ }
+ if (delimiter) {
+ safe_append("delimiter", delimiter);
+ }
+ if (maxkeys) {
+ char maxKeysString[64];
+ snprintf(maxKeysString, sizeof(maxKeysString), "%d", maxkeys);
+ safe_append("max-keys", maxKeysString);
+ }
+
+ ListBucketData *lbData =
+ (ListBucketData *) malloc(sizeof(ListBucketData));
+
+ if (!lbData) {
+ (*(handler->responseHandler.completeCallback))
+ (S3StatusOutOfMemory, 0, callbackData);
+ return;
+ }
+
+ simplexml_initialize(&(lbData->simpleXml), &listBucketXmlCallback, lbData);
+
+ lbData->responsePropertiesCallback =
+ handler->responseHandler.propertiesCallback;
+ lbData->listBucketCallback = handler->listBucketCallback;
+ lbData->responseCompleteCallback =
+ handler->responseHandler.completeCallback;
+ lbData->callbackData = callbackData;
+
+ string_buffer_initialize(lbData->isTruncated);
+ string_buffer_initialize(lbData->nextMarker);
+ initialize_list_bucket_data(lbData);
+
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypeGET, // httpRequestType
+ { bucketContext->bucketName, // bucketName
+ bucketContext->protocol, // protocol
+ bucketContext->uriStyle, // uriStyle
+ bucketContext->accessKeyId, // accessKeyId
+ bucketContext->secretAccessKey }, // secretAccessKey
+ 0, // key
+ queryParams[0] ? queryParams : 0, // queryParams
+ 0, // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ 0, // putProperties
+ &listBucketPropertiesCallback, // propertiesCallback
+ 0, // toS3Callback
+ 0, // toS3CallbackTotalSize
+ &listBucketDataCallback, // fromS3Callback
+ &listBucketCompleteCallback, // completeCallback
+ lbData // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
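+
+
+// Illustrative sketch (not compiled into the library): one way a caller
+// might drive S3_list_bucket synchronously.  The bucket name, credentials
+// and callback bodies are hypothetical; the initializers assume the field
+// order declared in libs3.h (responseHandler first, then
+// listBucketCallback), and printf assumes <stdio.h>.
+#if 0
+static S3Status exampleListCallback(int isTruncated, const char *nextMarker,
+                                    int contentsCount,
+                                    const S3ListBucketContent *contents,
+                                    int commonPrefixesCount,
+                                    const char **commonPrefixes,
+                                    void *callbackData)
+{
+    (void) isTruncated, (void) nextMarker, (void) commonPrefixesCount;
+    (void) commonPrefixes, (void) callbackData;
+    int i;
+    for (i = 0; i < contentsCount; i++) {
+        printf("%s\n", contents[i].key);
+    }
+    return S3StatusOK;
+}
+
+static S3Status examplePropertiesCallback
+    (const S3ResponseProperties *properties, void *callbackData)
+{
+    (void) properties, (void) callbackData;
+    return S3StatusOK;
+}
+
+static void exampleCompleteCallback(S3Status status,
+                                    const S3ErrorDetails *errorDetails,
+                                    void *callbackData)
+{
+    (void) errorDetails, (void) callbackData;
+    printf("list bucket: %s\n", S3_get_status_name(status));
+}
+
+static void exampleListBucket(void)
+{
+    S3BucketContext bucketContext =
+        { "example-bucket", S3ProtocolHTTPS, S3UriStylePath,
+          "EXAMPLE_ACCESS_KEY_ID", "EXAMPLE_SECRET_ACCESS_KEY" };
+    S3ListBucketHandler handler =
+        { { &examplePropertiesCallback, &exampleCompleteCallback },
+          &exampleListCallback };
+    // A null requestContext makes the call synchronous; list the whole
+    // bucket with no prefix, marker, delimiter or max-keys limit
+    S3_list_bucket(&bucketContext, 0, 0, 0, 0, 0, &handler, 0);
+}
+#endif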
--- /dev/null
+/** **************************************************************************
+ * error_parser.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <string.h>
+#include "error_parser.h"
+
+
+static S3Status errorXmlCallback(const char *elementPath, const char *data,
+ int dataLen, void *callbackData)
+{
+ // We ignore end of element callbacks because we don't care about them
+ if (!data) {
+ return S3StatusOK;
+ }
+
+ ErrorParser *errorParser = (ErrorParser *) callbackData;
+
+ int fit;
+
+ if (!strcmp(elementPath, "Error")) {
+ // Ignore, this is the Error element itself, we only care about subs
+ }
+ else if (!strcmp(elementPath, "Error/Code")) {
+ string_buffer_append(errorParser->code, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath, "Error/Message")) {
+ string_buffer_append(errorParser->message, data, dataLen, fit);
+ errorParser->s3ErrorDetails.message = errorParser->message;
+ }
+ else if (!strcmp(elementPath, "Error/Resource")) {
+ string_buffer_append(errorParser->resource, data, dataLen, fit);
+ errorParser->s3ErrorDetails.resource = errorParser->resource;
+ }
+ else if (!strcmp(elementPath, "Error/FurtherDetails")) {
+ string_buffer_append(errorParser->furtherDetails, data, dataLen, fit);
+ errorParser->s3ErrorDetails.furtherDetails =
+ errorParser->furtherDetails;
+ }
+ else {
+ if (strncmp(elementPath, "Error/", sizeof("Error/") - 1)) {
+ // If for some weird reason it's not within the Error element,
+ // ignore it
+ return S3StatusOK;
+ }
+ // It's an unknown error element. See if it matches the most
+ // recent error element.
+ const char *elementName = &(elementPath[sizeof("Error/") - 1]);
+ if (errorParser->s3ErrorDetails.extraDetailsCount &&
+ !strcmp(elementName, errorParser->s3ErrorDetails.extraDetails
+ [errorParser->s3ErrorDetails.extraDetailsCount - 1].name)) {
+ // Append the value
+ string_multibuffer_append(errorParser->extraDetailsNamesValues,
+ data, dataLen, fit);
+ // If it didn't fit, remove this extra
+ if (!fit) {
+ errorParser->s3ErrorDetails.extraDetailsCount--;
+ }
+ return S3StatusOK;
+ }
+ // OK, must add another unknown error element, if it will fit.
+ if (errorParser->s3ErrorDetails.extraDetailsCount ==
+ (int) (sizeof(errorParser->extraDetails) /
+ sizeof(errorParser->extraDetails[0]))) {
+ // Won't fit. Ignore this one.
+ return S3StatusOK;
+ }
+ // Copy in the name and value
+ char *name = string_multibuffer_current
+ (errorParser->extraDetailsNamesValues);
+ int nameLen = strlen(elementName);
+ string_multibuffer_add(errorParser->extraDetailsNamesValues,
+ elementName, nameLen, fit);
+ if (!fit) {
+ // Name didn't fit; ignore this one.
+ return S3StatusOK;
+ }
+ char *value = string_multibuffer_current
+ (errorParser->extraDetailsNamesValues);
+ string_multibuffer_add(errorParser->extraDetailsNamesValues,
+ data, dataLen, fit);
+ if (!fit) {
+ // Value didn't fit; ignore this one.
+ return S3StatusOK;
+ }
+ S3NameValue *nv =
+ &(errorParser->extraDetails
+ [errorParser->s3ErrorDetails.extraDetailsCount++]);
+ nv->name = name;
+ nv->value = value;
+ }
+
+ return S3StatusOK;
+}
+
+
+void error_parser_initialize(ErrorParser *errorParser)
+{
+ errorParser->s3ErrorDetails.message = 0;
+ errorParser->s3ErrorDetails.resource = 0;
+ errorParser->s3ErrorDetails.furtherDetails = 0;
+ errorParser->s3ErrorDetails.extraDetailsCount = 0;
+ errorParser->s3ErrorDetails.extraDetails = errorParser->extraDetails;
+ errorParser->errorXmlParserInitialized = 0;
+ string_buffer_initialize(errorParser->code);
+ string_buffer_initialize(errorParser->message);
+ string_buffer_initialize(errorParser->resource);
+ string_buffer_initialize(errorParser->furtherDetails);
+ string_multibuffer_initialize(errorParser->extraDetailsNamesValues);
+}
+
+
+S3Status error_parser_add(ErrorParser *errorParser, char *buffer,
+ int bufferSize)
+{
+ if (!errorParser->errorXmlParserInitialized) {
+ simplexml_initialize(&(errorParser->errorXmlParser), &errorXmlCallback,
+ errorParser);
+ errorParser->errorXmlParserInitialized = 1;
+ }
+
+ return simplexml_add(&(errorParser->errorXmlParser), buffer, bufferSize);
+}
+
+
+void error_parser_convert_status(ErrorParser *errorParser, S3Status *status)
+{
+ // Convert the error status string into a code
+ if (!errorParser->codeLen) {
+ return;
+ }
+
+#define HANDLE_CODE(name) \
+ do { \
+ if (!strcmp(errorParser->code, #name)) { \
+ *status = S3StatusError##name; \
+ goto code_set; \
+ } \
+ } while (0)
+
+ HANDLE_CODE(AccessDenied);
+ HANDLE_CODE(AccountProblem);
+ HANDLE_CODE(AmbiguousGrantByEmailAddress);
+ HANDLE_CODE(BadDigest);
+ HANDLE_CODE(BucketAlreadyExists);
+ HANDLE_CODE(BucketAlreadyOwnedByYou);
+ HANDLE_CODE(BucketNotEmpty);
+ HANDLE_CODE(CredentialsNotSupported);
+ HANDLE_CODE(CrossLocationLoggingProhibited);
+ HANDLE_CODE(EntityTooSmall);
+ HANDLE_CODE(EntityTooLarge);
+ HANDLE_CODE(ExpiredToken);
+ HANDLE_CODE(IncompleteBody);
+ HANDLE_CODE(IncorrectNumberOfFilesInPostRequest);
+ HANDLE_CODE(InlineDataTooLarge);
+ HANDLE_CODE(InternalError);
+ HANDLE_CODE(InvalidAccessKeyId);
+ HANDLE_CODE(InvalidAddressingHeader);
+ HANDLE_CODE(InvalidArgument);
+ HANDLE_CODE(InvalidBucketName);
+ HANDLE_CODE(InvalidDigest);
+ HANDLE_CODE(InvalidLocationConstraint);
+ HANDLE_CODE(InvalidPayer);
+ HANDLE_CODE(InvalidPolicyDocument);
+ HANDLE_CODE(InvalidRange);
+ HANDLE_CODE(InvalidSecurity);
+ HANDLE_CODE(InvalidSOAPRequest);
+ HANDLE_CODE(InvalidStorageClass);
+ HANDLE_CODE(InvalidTargetBucketForLogging);
+ HANDLE_CODE(InvalidToken);
+ HANDLE_CODE(InvalidURI);
+ HANDLE_CODE(KeyTooLong);
+ HANDLE_CODE(MalformedACLError);
+ HANDLE_CODE(MalformedXML);
+ HANDLE_CODE(MaxMessageLengthExceeded);
+ HANDLE_CODE(MaxPostPreDataLengthExceededError);
+ HANDLE_CODE(MetadataTooLarge);
+ HANDLE_CODE(MethodNotAllowed);
+ HANDLE_CODE(MissingAttachment);
+ HANDLE_CODE(MissingContentLength);
+ HANDLE_CODE(MissingSecurityElement);
+ HANDLE_CODE(MissingSecurityHeader);
+ HANDLE_CODE(NoLoggingStatusForKey);
+ HANDLE_CODE(NoSuchBucket);
+ HANDLE_CODE(NoSuchKey);
+ HANDLE_CODE(NotImplemented);
+ HANDLE_CODE(NotSignedUp);
+ HANDLE_CODE(OperationAborted);
+ HANDLE_CODE(PermanentRedirect);
+ HANDLE_CODE(PreconditionFailed);
+ HANDLE_CODE(Redirect);
+ HANDLE_CODE(RequestIsNotMultiPartContent);
+ HANDLE_CODE(RequestTimeout);
+ HANDLE_CODE(RequestTimeTooSkewed);
+ HANDLE_CODE(RequestTorrentOfBucketError);
+ HANDLE_CODE(SignatureDoesNotMatch);
+ HANDLE_CODE(SlowDown);
+ HANDLE_CODE(TemporaryRedirect);
+ HANDLE_CODE(TokenRefreshRequired);
+ HANDLE_CODE(TooManyBuckets);
+ HANDLE_CODE(UnexpectedContent);
+ HANDLE_CODE(UnresolvableGrantByEmailAddress);
+ HANDLE_CODE(UserKeyMustBeSpecified);
+ *status = S3StatusErrorUnknown;
+
+ code_set:
+
+ return;
+}
+
+
+// Always call this, whether or not error_parser_add was ever called
+void error_parser_deinitialize(ErrorParser *errorParser)
+{
+ if (errorParser->errorXmlParserInitialized) {
+ simplexml_deinitialize(&(errorParser->errorXmlParser));
+ }
+}
--- /dev/null
+/** **************************************************************************
+ * general.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <ctype.h>
+#include <string.h>
+#include "request.h"
+#include "simplexml.h"
+#include "util.h"
+
+static int initializeCountG = 0;
+
+S3Status S3_initialize(const char *userAgentInfo, int flags)
+{
+ if (initializeCountG++) {
+ return S3StatusOK;
+ }
+
+ return request_api_initialize(userAgentInfo, flags);
+}
+
+
+void S3_deinitialize()
+{
+ if (--initializeCountG) {
+ return;
+ }
+
+ request_api_deinitialize();
+}
+
+const char *S3_get_status_name(S3Status status)
+{
+ switch (status) {
+#define handlecase(s) \
+ case S3Status##s: \
+ return #s
+
+ handlecase(OK);
+ handlecase(InternalError);
+ handlecase(OutOfMemory);
+ handlecase(Interrupted);
+ handlecase(InvalidBucketNameTooLong);
+ handlecase(InvalidBucketNameFirstCharacter);
+ handlecase(InvalidBucketNameCharacter);
+ handlecase(InvalidBucketNameCharacterSequence);
+ handlecase(InvalidBucketNameTooShort);
+ handlecase(InvalidBucketNameDotQuadNotation);
+ handlecase(QueryParamsTooLong);
+ handlecase(FailedToInitializeRequest);
+ handlecase(MetaDataHeadersTooLong);
+ handlecase(BadMetaData);
+ handlecase(BadContentType);
+ handlecase(ContentTypeTooLong);
+ handlecase(BadMD5);
+ handlecase(MD5TooLong);
+ handlecase(BadCacheControl);
+ handlecase(CacheControlTooLong);
+ handlecase(BadContentDispositionFilename);
+ handlecase(ContentDispositionFilenameTooLong);
+ handlecase(BadContentEncoding);
+ handlecase(ContentEncodingTooLong);
+ handlecase(BadIfMatchETag);
+ handlecase(IfMatchETagTooLong);
+ handlecase(BadIfNotMatchETag);
+ handlecase(IfNotMatchETagTooLong);
+ handlecase(HeadersTooLong);
+ handlecase(KeyTooLong);
+ handlecase(UriTooLong);
+ handlecase(XmlParseFailure);
+ handlecase(EmailAddressTooLong);
+ handlecase(UserIdTooLong);
+ handlecase(UserDisplayNameTooLong);
+ handlecase(GroupUriTooLong);
+ handlecase(PermissionTooLong);
+ handlecase(TargetBucketTooLong);
+ handlecase(TargetPrefixTooLong);
+ handlecase(TooManyGrants);
+ handlecase(BadGrantee);
+ handlecase(BadPermission);
+ handlecase(XmlDocumentTooLarge);
+ handlecase(NameLookupError);
+ handlecase(FailedToConnect);
+ handlecase(ServerFailedVerification);
+ handlecase(ConnectionFailed);
+ handlecase(AbortedByCallback);
+ handlecase(ErrorAccessDenied);
+ handlecase(ErrorAccountProblem);
+ handlecase(ErrorAmbiguousGrantByEmailAddress);
+ handlecase(ErrorBadDigest);
+ handlecase(ErrorBucketAlreadyExists);
+ handlecase(ErrorBucketAlreadyOwnedByYou);
+ handlecase(ErrorBucketNotEmpty);
+ handlecase(ErrorCredentialsNotSupported);
+ handlecase(ErrorCrossLocationLoggingProhibited);
+ handlecase(ErrorEntityTooSmall);
+ handlecase(ErrorEntityTooLarge);
+ handlecase(ErrorExpiredToken);
+ handlecase(ErrorIncompleteBody);
+ handlecase(ErrorIncorrectNumberOfFilesInPostRequest);
+ handlecase(ErrorInlineDataTooLarge);
+ handlecase(ErrorInternalError);
+ handlecase(ErrorInvalidAccessKeyId);
+ handlecase(ErrorInvalidAddressingHeader);
+ handlecase(ErrorInvalidArgument);
+ handlecase(ErrorInvalidBucketName);
+ handlecase(ErrorInvalidDigest);
+ handlecase(ErrorInvalidLocationConstraint);
+ handlecase(ErrorInvalidPayer);
+ handlecase(ErrorInvalidPolicyDocument);
+ handlecase(ErrorInvalidRange);
+ handlecase(ErrorInvalidSecurity);
+ handlecase(ErrorInvalidSOAPRequest);
+ handlecase(ErrorInvalidStorageClass);
+ handlecase(ErrorInvalidTargetBucketForLogging);
+ handlecase(ErrorInvalidToken);
+ handlecase(ErrorInvalidURI);
+ handlecase(ErrorKeyTooLong);
+ handlecase(ErrorMalformedACLError);
+ handlecase(ErrorMalformedXML);
+ handlecase(ErrorMaxMessageLengthExceeded);
+ handlecase(ErrorMaxPostPreDataLengthExceededError);
+ handlecase(ErrorMetadataTooLarge);
+ handlecase(ErrorMethodNotAllowed);
+ handlecase(ErrorMissingAttachment);
+ handlecase(ErrorMissingContentLength);
+ handlecase(ErrorMissingSecurityElement);
+ handlecase(ErrorMissingSecurityHeader);
+ handlecase(ErrorNoLoggingStatusForKey);
+ handlecase(ErrorNoSuchBucket);
+ handlecase(ErrorNoSuchKey);
+ handlecase(ErrorNotImplemented);
+ handlecase(ErrorNotSignedUp);
+ handlecase(ErrorOperationAborted);
+ handlecase(ErrorPermanentRedirect);
+ handlecase(ErrorPreconditionFailed);
+ handlecase(ErrorRedirect);
+ handlecase(ErrorRequestIsNotMultiPartContent);
+ handlecase(ErrorRequestTimeout);
+ handlecase(ErrorRequestTimeTooSkewed);
+ handlecase(ErrorRequestTorrentOfBucketError);
+ handlecase(ErrorSignatureDoesNotMatch);
+ handlecase(ErrorSlowDown);
+ handlecase(ErrorTemporaryRedirect);
+ handlecase(ErrorTokenRefreshRequired);
+ handlecase(ErrorTooManyBuckets);
+ handlecase(ErrorUnexpectedContent);
+ handlecase(ErrorUnresolvableGrantByEmailAddress);
+ handlecase(ErrorUserKeyMustBeSpecified);
+ handlecase(ErrorUnknown);
+ handlecase(HttpErrorMovedTemporarily);
+ handlecase(HttpErrorBadRequest);
+ handlecase(HttpErrorForbidden);
+ handlecase(HttpErrorNotFound);
+ handlecase(HttpErrorConflict);
+ handlecase(HttpErrorUnknown);
+ }
+
+ return "Unknown";
+}
+
+
+S3Status S3_validate_bucket_name(const char *bucketName, S3UriStyle uriStyle)
+{
+ int virtualHostStyle = (uriStyle == S3UriStyleVirtualHost);
+ int len = 0, maxlen = virtualHostStyle ? 63 : 255;
+ const char *b = bucketName;
+
+ int hasDot = 0;
+ int hasNonDigit = 0;
+
+ while (*b) {
+ if (len == maxlen) {
+ return S3StatusInvalidBucketNameTooLong;
+ }
+ else if (isalpha(*b)) {
+ len++, b++;
+ hasNonDigit = 1;
+ }
+ else if (isdigit(*b)) {
+ len++, b++;
+ }
+ else if (len == 0) {
+ return S3StatusInvalidBucketNameFirstCharacter;
+ }
+ else if (*b == '_') {
+ /* Virtual host style bucket names cannot have underscores */
+ if (virtualHostStyle) {
+ return S3StatusInvalidBucketNameCharacter;
+ }
+ len++, b++;
+ hasNonDigit = 1;
+ }
+ else if (*b == '-') {
+ /* Virtual host style bucket names cannot have .- */
+ if (virtualHostStyle && (b > bucketName) && (*(b - 1) == '.')) {
+ return S3StatusInvalidBucketNameCharacterSequence;
+ }
+ len++, b++;
+ hasNonDigit = 1;
+ }
+ else if (*b == '.') {
+ /* Virtual host style bucket names cannot have -. */
+ if (virtualHostStyle && (b > bucketName) && (*(b - 1) == '-')) {
+ return S3StatusInvalidBucketNameCharacterSequence;
+ }
+ len++, b++;
+ hasDot = 1;
+ }
+ else {
+ return S3StatusInvalidBucketNameCharacter;
+ }
+ }
+
+ if (len < 3) {
+ return S3StatusInvalidBucketNameTooShort;
+ }
+
+ /* It's not clear from Amazon's documentation exactly what 'IP address
+ style' means. In its strictest sense, it could mean 'could be a valid
+ IP address', which would mean that 255.255.255.255 would be invalid,
+ whereas 256.256.256.256 would be valid. Or it could mean 'has 4 sets
+ of digits separated by dots'. Who knows. Let's just be really
+ conservative here: if it has any dots, and no non-digit characters,
+ then we reject it */
+ if (hasDot && !hasNonDigit) {
+ return S3StatusInvalidBucketNameDotQuadNotation;
+ }
+
+ return S3StatusOK;
+}
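+
+
+// Some illustrative outcomes of the rules above (hypothetical names, not an
+// exhaustive list):
+//   "my.bucket"    -> S3StatusOK (either URI style)
+//   "ab"           -> S3StatusInvalidBucketNameTooShort (fewer than 3 chars)
+//   "-bucket"      -> S3StatusInvalidBucketNameFirstCharacter
+//   "my_bucket"    -> S3StatusInvalidBucketNameCharacter with
+//                     S3UriStyleVirtualHost; with S3UriStylePath it is OK
+//   "192.168.0.1"  -> S3StatusInvalidBucketNameDotQuadNotation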
+
+
+typedef struct ConvertAclData
+{
+ char *ownerId;
+ int ownerIdLen;
+ char *ownerDisplayName;
+ int ownerDisplayNameLen;
+ int *aclGrantCountReturn;
+ S3AclGrant *aclGrants;
+
+ string_buffer(emailAddress, S3_MAX_GRANTEE_EMAIL_ADDRESS_SIZE);
+ string_buffer(userId, S3_MAX_GRANTEE_USER_ID_SIZE);
+ string_buffer(userDisplayName, S3_MAX_GRANTEE_DISPLAY_NAME_SIZE);
+ string_buffer(groupUri, 128);
+ string_buffer(permission, 32);
+} ConvertAclData;
+
+
+static S3Status convertAclXmlCallback(const char *elementPath,
+ const char *data, int dataLen,
+ void *callbackData)
+{
+ ConvertAclData *caData = (ConvertAclData *) callbackData;
+
+ int fit;
+
+ if (data) {
+ if (!strcmp(elementPath, "AccessControlPolicy/Owner/ID")) {
+ caData->ownerIdLen +=
+ snprintf(&(caData->ownerId[caData->ownerIdLen]),
+ S3_MAX_GRANTEE_USER_ID_SIZE - caData->ownerIdLen - 1,
+ "%.*s", dataLen, data);
+ if (caData->ownerIdLen >= S3_MAX_GRANTEE_USER_ID_SIZE) {
+ return S3StatusUserIdTooLong;
+ }
+ }
+ else if (!strcmp(elementPath, "AccessControlPolicy/Owner/"
+ "DisplayName")) {
+ caData->ownerDisplayNameLen +=
+ snprintf(&(caData->ownerDisplayName
+ [caData->ownerDisplayNameLen]),
+ S3_MAX_GRANTEE_DISPLAY_NAME_SIZE -
+ caData->ownerDisplayNameLen - 1,
+ "%.*s", dataLen, data);
+ if (caData->ownerDisplayNameLen >=
+ S3_MAX_GRANTEE_DISPLAY_NAME_SIZE) {
+ return S3StatusUserDisplayNameTooLong;
+ }
+ }
+ else if (!strcmp(elementPath,
+ "AccessControlPolicy/AccessControlList/Grant/"
+ "Grantee/EmailAddress")) {
+ // AmazonCustomerByEmail
+ string_buffer_append(caData->emailAddress, data, dataLen, fit);
+ if (!fit) {
+ return S3StatusEmailAddressTooLong;
+ }
+ }
+ else if (!strcmp(elementPath,
+ "AccessControlPolicy/AccessControlList/Grant/"
+ "Grantee/ID")) {
+ // CanonicalUser
+ string_buffer_append(caData->userId, data, dataLen, fit);
+ if (!fit) {
+ return S3StatusUserIdTooLong;
+ }
+ }
+ else if (!strcmp(elementPath,
+ "AccessControlPolicy/AccessControlList/Grant/"
+ "Grantee/DisplayName")) {
+ // CanonicalUser
+ string_buffer_append(caData->userDisplayName, data, dataLen, fit);
+ if (!fit) {
+ return S3StatusUserDisplayNameTooLong;
+ }
+ }
+ else if (!strcmp(elementPath,
+ "AccessControlPolicy/AccessControlList/Grant/"
+ "Grantee/URI")) {
+ // Group
+ string_buffer_append(caData->groupUri, data, dataLen, fit);
+ if (!fit) {
+ return S3StatusGroupUriTooLong;
+ }
+ }
+ else if (!strcmp(elementPath,
+ "AccessControlPolicy/AccessControlList/Grant/"
+ "Permission")) {
+ // Permission
+ string_buffer_append(caData->permission, data, dataLen, fit);
+ if (!fit) {
+ return S3StatusPermissionTooLong;
+ }
+ }
+ }
+ else {
+ if (!strcmp(elementPath, "AccessControlPolicy/AccessControlList/"
+ "Grant")) {
+ // A grant has just been completed; so add the next S3AclGrant
+ // based on the values read
+ if (*(caData->aclGrantCountReturn) == S3_MAX_ACL_GRANT_COUNT) {
+ return S3StatusTooManyGrants;
+ }
+
+ S3AclGrant *grant = &(caData->aclGrants
+ [*(caData->aclGrantCountReturn)]);
+
+ if (caData->emailAddress[0]) {
+ grant->granteeType = S3GranteeTypeAmazonCustomerByEmail;
+ strcpy(grant->grantee.amazonCustomerByEmail.emailAddress,
+ caData->emailAddress);
+ }
+ else if (caData->userId[0] && caData->userDisplayName[0]) {
+ grant->granteeType = S3GranteeTypeCanonicalUser;
+ strcpy(grant->grantee.canonicalUser.id, caData->userId);
+ strcpy(grant->grantee.canonicalUser.displayName,
+ caData->userDisplayName);
+ }
+ else if (caData->groupUri[0]) {
+ if (!strcmp(caData->groupUri,
+ "http://acs.amazonaws.com/groups/global/"
+ "AuthenticatedUsers")) {
+ grant->granteeType = S3GranteeTypeAllAwsUsers;
+ }
+ else if (!strcmp(caData->groupUri,
+ "http://acs.amazonaws.com/groups/global/"
+ "AllUsers")) {
+ grant->granteeType = S3GranteeTypeAllUsers;
+ }
+ else if (!strcmp(caData->groupUri,
+ "http://acs.amazonaws.com/groups/s3/"
+ "LogDelivery")) {
+ grant->granteeType = S3GranteeTypeLogDelivery;
+ }
+ else {
+ return S3StatusBadGrantee;
+ }
+ }
+ else {
+ return S3StatusBadGrantee;
+ }
+
+ if (!strcmp(caData->permission, "READ")) {
+ grant->permission = S3PermissionRead;
+ }
+ else if (!strcmp(caData->permission, "WRITE")) {
+ grant->permission = S3PermissionWrite;
+ }
+ else if (!strcmp(caData->permission, "READ_ACP")) {
+ grant->permission = S3PermissionReadACP;
+ }
+ else if (!strcmp(caData->permission, "WRITE_ACP")) {
+ grant->permission = S3PermissionWriteACP;
+ }
+ else if (!strcmp(caData->permission, "FULL_CONTROL")) {
+ grant->permission = S3PermissionFullControl;
+ }
+ else {
+ return S3StatusBadPermission;
+ }
+
+ (*(caData->aclGrantCountReturn))++;
+
+ string_buffer_initialize(caData->emailAddress);
+ string_buffer_initialize(caData->userId);
+ string_buffer_initialize(caData->userDisplayName);
+ string_buffer_initialize(caData->groupUri);
+ string_buffer_initialize(caData->permission);
+ }
+ }
+
+ return S3StatusOK;
+}
+
+
+S3Status S3_convert_acl(char *aclXml, char *ownerId, char *ownerDisplayName,
+ int *aclGrantCountReturn, S3AclGrant *aclGrants)
+{
+ ConvertAclData data;
+
+ data.ownerId = ownerId;
+ data.ownerIdLen = 0;
+ data.ownerId[0] = 0;
+ data.ownerDisplayName = ownerDisplayName;
+ data.ownerDisplayNameLen = 0;
+ data.ownerDisplayName[0] = 0;
+ data.aclGrantCountReturn = aclGrantCountReturn;
+ data.aclGrants = aclGrants;
+ *aclGrantCountReturn = 0;
+ string_buffer_initialize(data.emailAddress);
+ string_buffer_initialize(data.userId);
+ string_buffer_initialize(data.userDisplayName);
+ string_buffer_initialize(data.groupUri);
+ string_buffer_initialize(data.permission);
+
+ // Use a simplexml parser
+ SimpleXml simpleXml;
+ simplexml_initialize(&simpleXml, &convertAclXmlCallback, &data);
+
+ S3Status status = simplexml_add(&simpleXml, aclXml, strlen(aclXml));
+
+ simplexml_deinitialize(&simpleXml);
+
+ return status;
+}
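+
+
+// Illustrative sketch (not compiled into the library): converting an ACL
+// XML document that the caller has already obtained (for example, from a
+// prior ACL GET request) into S3AclGrant structures.  The aclXml argument
+// is hypothetical and printf assumes <stdio.h>.
+#if 0
+static void exampleConvertAcl(char *aclXml)
+{
+    char ownerId[S3_MAX_GRANTEE_USER_ID_SIZE];
+    char ownerDisplayName[S3_MAX_GRANTEE_DISPLAY_NAME_SIZE];
+    S3AclGrant aclGrants[S3_MAX_ACL_GRANT_COUNT];
+    int aclGrantCount = 0;
+
+    S3Status status = S3_convert_acl(aclXml, ownerId, ownerDisplayName,
+                                     &aclGrantCount, aclGrants);
+    if (status == S3StatusOK) {
+        printf("owner %s (%s) holds %d grants\n",
+               ownerDisplayName, ownerId, aclGrantCount);
+    }
+}
+#endif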
+
+
+int S3_status_is_retryable(S3Status status)
+{
+ switch (status) {
+ case S3StatusNameLookupError:
+ case S3StatusFailedToConnect:
+ case S3StatusConnectionFailed:
+ case S3StatusErrorInternalError:
+ case S3StatusErrorOperationAborted:
+ case S3StatusErrorRequestTimeout:
+ return 1;
+ default:
+ return 0;
+ }
+}
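+
+
+// Illustrative sketch (not compiled into the library): a caller might wrap
+// one complete request in a bounded retry loop, retrying only while
+// S3_status_is_retryable() reports a transient failure.  The statusG
+// variable and the exampleDoRequest() helper are hypothetical.
+#if 0
+static S3Status statusG;             // set by the request's complete callback
+static void exampleDoRequest(void);  // issues one request and sets statusG
+
+static void exampleRequestWithRetries(void)
+{
+    int triesLeft = 5;
+    do {
+        exampleDoRequest();
+    } while (S3_status_is_retryable(statusG) && (--triesLeft > 0));
+}
+#endif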
--- /dev/null
+/** **************************************************************************
+ * mingw_functions.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <pthread.h>
+#include <sys/utsname.h>
+
+unsigned long pthread_self()
+{
+ return (unsigned long) GetCurrentThreadId();
+}
+
+
+int pthread_mutex_init(pthread_mutex_t *mutex, void *v)
+{
+ (void) v;
+
+ InitializeCriticalSection(&(mutex->criticalSection));
+
+ return 0;
+}
+
+
+int pthread_mutex_lock(pthread_mutex_t *mutex)
+{
+ EnterCriticalSection(&(mutex->criticalSection));
+
+ return 0;
+}
+
+
+int pthread_mutex_unlock(pthread_mutex_t *mutex)
+{
+ LeaveCriticalSection(&(mutex->criticalSection));
+
+ return 0;
+}
+
+
+int pthread_mutex_destroy(pthread_mutex_t *mutex)
+{
+ DeleteCriticalSection(&(mutex->criticalSection));
+
+ return 0;
+}
+
+
+int uname(struct utsname *u)
+{
+ OSVERSIONINFO info;
+ info.dwOSVersionInfoSize = sizeof(info);
+
+ if (!GetVersionEx(&info)) {
+ return -1;
+ }
+
+ u->machine = "";
+
+ switch (info.dwMajorVersion) {
+ case 4:
+ switch (info.dwMinorVersion) {
+ case 0:
+ u->sysname = "Microsoft Windows NT 4.0";
+ break;
+ case 10:
+ u->sysname = "Microsoft Windows 98";
+ break;
+ case 90:
+ u->sysname = "Microsoft Windows Me";
+ break;
+ default:
+ return -1;
+ }
+ break;
+
+ case 5:
+ switch (info.dwMinorVersion) {
+ case 0:
+ u->sysname = "Microsoft Windows 2000";
+ break;
+ case 1:
+ u->sysname = "Microsoft Windows XP";
+ break;
+ case 2:
+ u->sysname = "Microsoft Server 2003";
+ break;
+ default:
+ return -1;
+ }
+ break;
+
+ default:
+ return -1;
+ }
+
+ return 0;
+}
--- /dev/null
+/** **************************************************************************
+ * mingw_s3_functions.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+int setenv(const char *a, const char *b, int c)
+{
+ (void) c;
+
+ return SetEnvironmentVariable(a, b);
+}
+
+int unsetenv(const char *a)
+{
+ return SetEnvironmentVariable(a, 0);
+}
--- /dev/null
+/** **************************************************************************
+ * object.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <stdlib.h>
+#include <string.h>
+#include "libs3.h"
+#include "request.h"
+
+
+// put object ----------------------------------------------------------------
+
+void S3_put_object(const S3BucketContext *bucketContext, const char *key,
+ uint64_t contentLength,
+ const S3PutProperties *putProperties,
+ S3RequestContext *requestContext,
+ const S3PutObjectHandler *handler, void *callbackData)
+{
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypePUT, // httpRequestType
+ { bucketContext->bucketName, // bucketName
+ bucketContext->protocol, // protocol
+ bucketContext->uriStyle, // uriStyle
+ bucketContext->accessKeyId, // accessKeyId
+ bucketContext->secretAccessKey }, // secretAccessKey
+ key, // key
+ 0, // queryParams
+ 0, // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ putProperties, // putProperties
+ handler->responseHandler.propertiesCallback, // propertiesCallback
+ handler->putObjectDataCallback, // toS3Callback
+ contentLength, // toS3CallbackTotalSize
+ 0, // fromS3Callback
+ handler->responseHandler.completeCallback, // completeCallback
+ callbackData // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
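+
+
+// Illustrative sketch (not compiled into the library): feeding an in-memory
+// buffer to S3_put_object through the put-object data callback.  The
+// globals and the key name are hypothetical, and the handler initializer
+// assumes the field order declared in libs3.h (responseHandler first, then
+// putObjectDataCallback); memcpy comes from <string.h>, included above.
+#if 0
+static const char *putDataG;        // data remaining to send
+static uint64_t putRemainingG;      // bytes remaining to send
+
+static int examplePutDataCallback(int bufferSize, char *buffer,
+                                  void *callbackData)
+{
+    (void) callbackData;
+    int toCopy = (putRemainingG > (uint64_t) bufferSize) ?
+        bufferSize : (int) putRemainingG;
+    memcpy(buffer, putDataG, toCopy);
+    putDataG += toCopy, putRemainingG -= toCopy;
+    return toCopy;
+}
+
+static S3Status examplePutPropertiesCallback
+    (const S3ResponseProperties *properties, void *callbackData)
+{
+    (void) properties, (void) callbackData;
+    return S3StatusOK;
+}
+
+static void examplePutCompleteCallback(S3Status status,
+                                       const S3ErrorDetails *errorDetails,
+                                       void *callbackData)
+{
+    (void) status, (void) errorDetails, (void) callbackData;
+}
+
+static void examplePutObject(const S3BucketContext *bucketContext,
+                             const char *data, uint64_t length)
+{
+    S3PutObjectHandler handler =
+        { { &examplePutPropertiesCallback, &examplePutCompleteCallback },
+          &examplePutDataCallback };
+    putDataG = data, putRemainingG = length;
+    // No put properties, and a null requestContext for a synchronous call
+    S3_put_object(bucketContext, "example-key", length, 0, 0, &handler, 0);
+}
+#endif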
+
+
+// copy object ---------------------------------------------------------------
+
+
+typedef struct CopyObjectData
+{
+ SimpleXml simpleXml;
+
+ S3ResponsePropertiesCallback *responsePropertiesCallback;
+ S3ResponseCompleteCallback *responseCompleteCallback;
+ void *callbackData;
+
+ int64_t *lastModifiedReturn;
+ int eTagReturnSize;
+ char *eTagReturn;
+ int eTagReturnLen;
+
+ string_buffer(lastModified, 256);
+} CopyObjectData;
+
+
+static S3Status copyObjectXmlCallback(const char *elementPath,
+ const char *data, int dataLen,
+ void *callbackData)
+{
+ CopyObjectData *coData = (CopyObjectData *) callbackData;
+
+ int fit;
+
+ if (data) {
+ if (!strcmp(elementPath, "CopyObjectResult/LastModified")) {
+ string_buffer_append(coData->lastModified, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath, "CopyObjectResult/ETag")) {
+ if (coData->eTagReturnSize && coData->eTagReturn) {
+ coData->eTagReturnLen +=
+ snprintf(&(coData->eTagReturn[coData->eTagReturnLen]),
+ coData->eTagReturnSize -
+ coData->eTagReturnLen - 1,
+ "%.*s", dataLen, data);
+ if (coData->eTagReturnLen >= coData->eTagReturnSize) {
+ return S3StatusXmlParseFailure;
+ }
+ }
+ }
+ }
+
+ return S3StatusOK;
+}
+
+
+static S3Status copyObjectPropertiesCallback
+ (const S3ResponseProperties *responseProperties, void *callbackData)
+{
+ CopyObjectData *coData = (CopyObjectData *) callbackData;
+
+ return (*(coData->responsePropertiesCallback))
+ (responseProperties, coData->callbackData);
+}
+
+
+static S3Status copyObjectDataCallback(int bufferSize, const char *buffer,
+ void *callbackData)
+{
+ CopyObjectData *coData = (CopyObjectData *) callbackData;
+
+ return simplexml_add(&(coData->simpleXml), buffer, bufferSize);
+}
+
+
+static void copyObjectCompleteCallback(S3Status requestStatus,
+ const S3ErrorDetails *s3ErrorDetails,
+ void *callbackData)
+{
+ CopyObjectData *coData = (CopyObjectData *) callbackData;
+
+ if (coData->lastModifiedReturn) {
+ time_t lastModified = -1;
+ if (coData->lastModifiedLen) {
+ lastModified = parseIso8601Time(coData->lastModified);
+ }
+
+ *(coData->lastModifiedReturn) = lastModified;
+ }
+
+ (*(coData->responseCompleteCallback))
+ (requestStatus, s3ErrorDetails, coData->callbackData);
+
+ simplexml_deinitialize(&(coData->simpleXml));
+
+ free(coData);
+}
+
+
+void S3_copy_object(const S3BucketContext *bucketContext, const char *key,
+ const char *destinationBucket, const char *destinationKey,
+ const S3PutProperties *putProperties,
+ int64_t *lastModifiedReturn, int eTagReturnSize,
+ char *eTagReturn, S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData)
+{
+ // Create the callback data
+ CopyObjectData *data =
+ (CopyObjectData *) malloc(sizeof(CopyObjectData));
+ if (!data) {
+ (*(handler->completeCallback))(S3StatusOutOfMemory, 0, callbackData);
+ return;
+ }
+
+ simplexml_initialize(&(data->simpleXml), &copyObjectXmlCallback, data);
+
+ data->responsePropertiesCallback = handler->propertiesCallback;
+ data->responseCompleteCallback = handler->completeCallback;
+ data->callbackData = callbackData;
+
+ data->lastModifiedReturn = lastModifiedReturn;
+ data->eTagReturnSize = eTagReturnSize;
+ data->eTagReturn = eTagReturn;
+ if (data->eTagReturnSize && data->eTagReturn) {
+ data->eTagReturn[0] = 0;
+ }
+ data->eTagReturnLen = 0;
+ string_buffer_initialize(data->lastModified);
+
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypeCOPY, // httpRequestType
+ { destinationBucket ? destinationBucket :
+ bucketContext->bucketName, // bucketName
+ bucketContext->protocol, // protocol
+ bucketContext->uriStyle, // uriStyle
+ bucketContext->accessKeyId, // accessKeyId
+ bucketContext->secretAccessKey }, // secretAccessKey
+ destinationKey ? destinationKey : key, // key
+ 0, // queryParams
+ 0, // subResource
+ bucketContext->bucketName, // copySourceBucketName
+ key, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ putProperties, // putProperties
+ &copyObjectPropertiesCallback, // propertiesCallback
+ 0, // toS3Callback
+ 0, // toS3CallbackTotalSize
+ &copyObjectDataCallback, // fromS3Callback
+ &copyObjectCompleteCallback, // completeCallback
+ data // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
+
+
+// get object ----------------------------------------------------------------
+
+void S3_get_object(const S3BucketContext *bucketContext, const char *key,
+ const S3GetConditions *getConditions,
+ uint64_t startByte, uint64_t byteCount,
+ S3RequestContext *requestContext,
+ const S3GetObjectHandler *handler, void *callbackData)
+{
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypeGET, // httpRequestType
+ { bucketContext->bucketName, // bucketName
+ bucketContext->protocol, // protocol
+ bucketContext->uriStyle, // uriStyle
+ bucketContext->accessKeyId, // accessKeyId
+ bucketContext->secretAccessKey }, // secretAccessKey
+ key, // key
+ 0, // queryParams
+ 0, // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ getConditions, // getConditions
+ startByte, // startByte
+ byteCount, // byteCount
+ 0, // putProperties
+ handler->responseHandler.propertiesCallback, // propertiesCallback
+ 0, // toS3Callback
+ 0, // toS3CallbackTotalSize
+ handler->getObjectDataCallback, // fromS3Callback
+ handler->responseHandler.completeCallback, // completeCallback
+ callbackData // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
+
+
+// head object ---------------------------------------------------------------
+
+void S3_head_object(const S3BucketContext *bucketContext, const char *key,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData)
+{
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypeHEAD, // httpRequestType
+ { bucketContext->bucketName, // bucketName
+ bucketContext->protocol, // protocol
+ bucketContext->uriStyle, // uriStyle
+ bucketContext->accessKeyId, // accessKeyId
+ bucketContext->secretAccessKey }, // secretAccessKey
+ key, // key
+ 0, // queryParams
+ 0, // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ 0, // putProperties
+ handler->propertiesCallback, // propertiesCallback
+ 0, // toS3Callback
+ 0, // toS3CallbackTotalSize
+ 0, // fromS3Callback
+ handler->completeCallback, // completeCallback
+ callbackData // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
+
+
+// delete object --------------------------------------------------------------
+
+void S3_delete_object(const S3BucketContext *bucketContext, const char *key,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler, void *callbackData)
+{
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypeDELETE, // httpRequestType
+ { bucketContext->bucketName, // bucketName
+ bucketContext->protocol, // protocol
+ bucketContext->uriStyle, // uriStyle
+ bucketContext->accessKeyId, // accessKeyId
+ bucketContext->secretAccessKey }, // secretAccessKey
+ key, // key
+ 0, // queryParams
+ 0, // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ 0, // putProperties
+ handler->propertiesCallback, // propertiesCallback
+ 0, // toS3Callback
+ 0, // toS3CallbackTotalSize
+ 0, // fromS3Callback
+ handler->completeCallback, // completeCallback
+ callbackData // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
--- /dev/null
+/** **************************************************************************
+ * request.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <ctype.h>
+#include <pthread.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/utsname.h>
+#include "request.h"
+#include "request_context.h"
+#include "response_headers_handler.h"
+#include "util.h"
+
+
+#define USER_AGENT_SIZE 256
+#define REQUEST_STACK_SIZE 32
+
+static char userAgentG[USER_AGENT_SIZE];
+
+static pthread_mutex_t requestStackMutexG;
+
+static Request *requestStackG[REQUEST_STACK_SIZE];
+
+static int requestStackCountG;
+
+
+typedef struct RequestComputedValues
+{
+ // All x-amz- headers, in normalized form (i.e. NAME: VALUE, no other ws)
+ char *amzHeaders[S3_MAX_METADATA_COUNT + 2]; // + 2 for acl and date
+
+ // The number of x-amz- headers
+ int amzHeadersCount;
+
+ // Storage for amzHeaders (the +256 is for x-amz-acl and x-amz-date)
+ char amzHeadersRaw[COMPACTED_METADATA_BUFFER_SIZE + 256 + 1];
+
+ // Canonicalized x-amz- headers
+ string_multibuffer(canonicalizedAmzHeaders,
+ COMPACTED_METADATA_BUFFER_SIZE + 256 + 1);
+
+ // URL-Encoded key
+ char urlEncodedKey[MAX_URLENCODED_KEY_SIZE + 1];
+
+ // Canonicalized resource
+ char canonicalizedResource[MAX_CANONICALIZED_RESOURCE_SIZE + 1];
+
+ // Cache-Control header (or empty)
+ char cacheControlHeader[128];
+
+ // Content-Type header (or empty)
+ char contentTypeHeader[128];
+
+ // Content-MD5 header (or empty)
+ char md5Header[128];
+
+ // Content-Disposition header (or empty)
+ char contentDispositionHeader[128];
+
+ // Content-Encoding header (or empty)
+ char contentEncodingHeader[128];
+
+ // Expires header (or empty)
+ char expiresHeader[128];
+
+ // If-Modified-Since header
+ char ifModifiedSinceHeader[128];
+
+ // If-Unmodified-Since header
+ char ifUnmodifiedSinceHeader[128];
+
+ // If-Match header
+ char ifMatchHeader[128];
+
+ // If-None-Match header
+ char ifNoneMatchHeader[128];
+
+ // Range header
+ char rangeHeader[128];
+
+ // Authorization header
+ char authorizationHeader[128];
+} RequestComputedValues;
+
+
+// Called whenever we detect that the request headers have been completely
+// processed; this happens either when we get our first read/write callback,
+// or when the request has finished being processed.  On failure, this
+// records the error in request->status rather than returning a value.
+static void request_headers_done(Request *request)
+{
+ if (request->propertiesCallbackMade) {
+ return;
+ }
+
+ request->propertiesCallbackMade = 1;
+
+ // Get the http response code
+ long httpResponseCode;
+ request->httpResponseCode = 0;
+ if (curl_easy_getinfo(request->curl, CURLINFO_RESPONSE_CODE,
+ &httpResponseCode) != CURLE_OK) {
+ // Not able to get the HTTP response code - error
+ request->status = S3StatusInternalError;
+ return;
+ }
+ else {
+ request->httpResponseCode = httpResponseCode;
+ }
+
+ response_headers_handler_done(&(request->responseHeadersHandler),
+ request->curl);
+
+ // Only make the callback if it was a successful request; otherwise we're
+ // returning information about the error response itself
+ if (request->propertiesCallback &&
+ (request->httpResponseCode >= 200) &&
+ (request->httpResponseCode <= 299)) {
+ request->status = (*(request->propertiesCallback))
+ (&(request->responseHeadersHandler.responseProperties),
+ request->callbackData);
+ }
+}
+
+
+static size_t curl_header_func(void *ptr, size_t size, size_t nmemb,
+ void *data)
+{
+ Request *request = (Request *) data;
+
+ int len = size * nmemb;
+
+ response_headers_handler_add
+ (&(request->responseHeadersHandler), (char *) ptr, len);
+
+ return len;
+}
+
+
+static size_t curl_read_func(void *ptr, size_t size, size_t nmemb, void *data)
+{
+ Request *request = (Request *) data;
+
+ int len = size * nmemb;
+
+ request_headers_done(request);
+
+ if (request->status != S3StatusOK) {
+ return CURL_READFUNC_ABORT;
+ }
+
+ // If there is no data callback, or the data callback has already returned
+ // contentLength bytes, return 0;
+ if (!request->toS3Callback || !request->toS3CallbackBytesRemaining) {
+ return 0;
+ }
+
+ // Don't tell the callback that we are willing to accept more data than we
+ // really are
+ if (len > request->toS3CallbackBytesRemaining) {
+ len = request->toS3CallbackBytesRemaining;
+ }
+
+ // Otherwise, make the data callback
+ int ret = (*(request->toS3Callback))
+ (len, (char *) ptr, request->callbackData);
+ if (ret < 0) {
+ request->status = S3StatusAbortedByCallback;
+ return CURL_READFUNC_ABORT;
+ }
+ else {
+ if (ret > request->toS3CallbackBytesRemaining) {
+ ret = request->toS3CallbackBytesRemaining;
+ }
+ request->toS3CallbackBytesRemaining -= ret;
+ return ret;
+ }
+}
+
+
+static size_t curl_write_func(void *ptr, size_t size, size_t nmemb,
+ void *data)
+{
+ Request *request = (Request *) data;
+
+ int len = size * nmemb;
+
+ request_headers_done(request);
+
+ if (request->status != S3StatusOK) {
+ return 0;
+ }
+
+ // On HTTP error, we expect to parse an HTTP error response
+ if ((request->httpResponseCode < 200) ||
+ (request->httpResponseCode > 299)) {
+ request->status = error_parser_add
+ (&(request->errorParser), (char *) ptr, len);
+ }
+ // If there was a callback registered, make it
+ else if (request->fromS3Callback) {
+ request->status = (*(request->fromS3Callback))
+ (len, (char *) ptr, request->callbackData);
+ }
+ // Else, consider this an error - S3 has sent back data when it was not
+ // expected
+ else {
+ request->status = S3StatusInternalError;
+ }
+
+ return ((request->status == S3StatusOK) ? len : 0);
+}
+
+
+// This function 'normalizes' all x-amz-meta headers provided in
+// params->putProperties->metaData, which means it removes all whitespace
+// from them such that they all look exactly like this:
+// x-amz-meta-${NAME}: ${VALUE}
+// It also adds the x-amz-acl, x-amz-copy-source, and x-amz-metadata-directive
+// headers if necessary, and always adds the x-amz-date header. It copies the
+// raw string values into values->amzHeadersRaw, and creates an array of
+// string pointers representing these headers in values->amzHeaders (and also
+// sets values->amzHeadersCount to be the count of the total number of x-amz-
+// headers thus created).  See the illustrative example after this function.
+static S3Status compose_amz_headers(const RequestParams *params,
+ RequestComputedValues *values)
+{
+ const S3PutProperties *properties = params->putProperties;
+
+ values->amzHeadersCount = 0;
+ values->amzHeadersRaw[0] = 0;
+ int len = 0;
+
+ // Append a header to amzHeaders, trimming whitespace from the end.
+ // Does NOT trim whitespace from the beginning.
+#define headers_append(isNewHeader, format, ...) \
+ do { \
+ if (isNewHeader) { \
+ values->amzHeaders[values->amzHeadersCount++] = \
+ &(values->amzHeadersRaw[len]); \
+ } \
+ len += snprintf(&(values->amzHeadersRaw[len]), \
+ sizeof(values->amzHeadersRaw) - len, \
+ format, __VA_ARGS__); \
+ if (len >= (int) sizeof(values->amzHeadersRaw)) { \
+ return S3StatusMetaDataHeadersTooLong; \
+ } \
+ while ((len > 0) && (values->amzHeadersRaw[len - 1] == ' ')) { \
+ len--; \
+ } \
+ values->amzHeadersRaw[len++] = 0; \
+ } while (0)
+
+#define header_name_tolower_copy(str, l) \
+ do { \
+ values->amzHeaders[values->amzHeadersCount++] = \
+ &(values->amzHeadersRaw[len]); \
+ if ((len + l) >= (int) sizeof(values->amzHeadersRaw)) { \
+ return S3StatusMetaDataHeadersTooLong; \
+ } \
+ int todo = l; \
+ while (todo--) { \
+ if ((*(str) >= 'A') && (*(str) <= 'Z')) { \
+ values->amzHeadersRaw[len++] = 'a' + (*(str) - 'A'); \
+ } \
+ else { \
+ values->amzHeadersRaw[len++] = *(str); \
+ } \
+ (str)++; \
+ } \
+ } while (0)
+
+ // Check and copy in the x-amz-meta headers
+ if (properties) {
+ int i;
+ for (i = 0; i < properties->metaDataCount; i++) {
+ const S3NameValue *property = &(properties->metaData[i]);
+ char headerName[S3_MAX_METADATA_SIZE - sizeof(": v")];
+ int l = snprintf(headerName, sizeof(headerName),
+ S3_METADATA_HEADER_NAME_PREFIX "%s",
+ property->name);
+ char *hn = headerName;
+ header_name_tolower_copy(hn, l);
+ // Copy in the value
+ headers_append(0, ": %s", property->value);
+ }
+
+ // Add the x-amz-acl header, if necessary
+ const char *cannedAclString;
+ switch (params->putProperties->cannedAcl) {
+ case S3CannedAclPrivate:
+ cannedAclString = 0;
+ break;
+ case S3CannedAclPublicRead:
+ cannedAclString = "public-read";
+ break;
+ case S3CannedAclPublicReadWrite:
+ cannedAclString = "public-read-write";
+ break;
+ default: // S3CannedAclAuthenticatedRead
+ cannedAclString = "authenticated-read";
+ break;
+ }
+ if (cannedAclString) {
+ headers_append(1, "x-amz-acl: %s", cannedAclString);
+ }
+ }
+
+ // Add the x-amz-date header
+ time_t now = time(NULL);
+ char date[64];
+ strftime(date, sizeof(date), "%a, %d %b %Y %H:%M:%S GMT", gmtime(&now));
+ headers_append(1, "x-amz-date: %s", date);
+
+ if (params->httpRequestType == HttpRequestTypeCOPY) {
+ // Add the x-amz-copy-source header
+ if (params->copySourceBucketName && params->copySourceBucketName[0] &&
+ params->copySourceKey && params->copySourceKey[0]) {
+ headers_append(1, "x-amz-copy-source: /%s/%s",
+ params->copySourceBucketName,
+ params->copySourceKey);
+ }
+ // And the x-amz-metadata-directive header
+ if (params->putProperties) {
+ headers_append(1, "%s", "x-amz-metadata-directive: REPLACE");
+ }
+ }
+
+ return S3StatusOK;
+}
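+
+
+// Illustrative example of the normalization above (hypothetical values): a
+// caller-supplied metadata entry with name "Color" and value "blue " becomes
+// the single amzHeaders entry
+//     x-amz-meta-color: blue
+// (name lowercased, trailing whitespace trimmed), and every request also
+// gets a date header of the form
+//     x-amz-date: Tue, 01 Jan 2008 12:00:00 GMT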
+
+
+// Composes the other headers
+static S3Status compose_standard_headers(const RequestParams *params,
+ RequestComputedValues *values)
+{
+
+#define do_put_header(fmt, sourceField, destField, badError, tooLongError) \
+ do { \
+ if (params->putProperties && \
+ params->putProperties-> sourceField && \
+ params->putProperties-> sourceField[0]) { \
+ /* Skip whitespace at beginning of val */ \
+ const char *val = params->putProperties-> sourceField; \
+ while (*val && is_blank(*val)) { \
+ val++; \
+ } \
+ if (!*val) { \
+ return badError; \
+ } \
+ /* Compose header, make sure it all fit */ \
+ int len = snprintf(values-> destField, \
+ sizeof(values-> destField), fmt, val); \
+ if (len >= (int) sizeof(values-> destField)) { \
+ return tooLongError; \
+ } \
+ /* Now remove the whitespace at the end */ \
+ while ((len > 0) && is_blank(values-> destField[len - 1])) { \
+ len--; \
+ } \
+ values-> destField[len] = 0; \
+ } \
+ else { \
+ values-> destField[0] = 0; \
+ } \
+ } while (0)
+
+#define do_get_header(fmt, sourceField, destField, badError, tooLongError) \
+ do { \
+ if (params->getConditions && \
+ params->getConditions-> sourceField && \
+ params->getConditions-> sourceField[0]) { \
+ /* Skip whitespace at beginning of val */ \
+ const char *val = params->getConditions-> sourceField; \
+ while (*val && is_blank(*val)) { \
+ val++; \
+ } \
+ if (!*val) { \
+ return badError; \
+ } \
+ /* Compose header, make sure it all fit */ \
+ int len = snprintf(values-> destField, \
+ sizeof(values-> destField), fmt, val); \
+ if (len >= (int) sizeof(values-> destField)) { \
+ return tooLongError; \
+ } \
+ /* Now remove the whitespace at the end */ \
+ while ((len > 0) && is_blank(values-> destField[len - 1])) { \
+ len--; \
+ } \
+ values-> destField[len] = 0; \
+ } \
+ else { \
+ values-> destField[0] = 0; \
+ } \
+ } while (0)
+
+ // Cache-Control
+ do_put_header("Cache-Control: %s", cacheControl, cacheControlHeader,
+ S3StatusBadCacheControl, S3StatusCacheControlTooLong);
+
+ // ContentType
+ do_put_header("Content-Type: %s", contentType, contentTypeHeader,
+ S3StatusBadContentType, S3StatusContentTypeTooLong);
+
+ // MD5
+ do_put_header("Content-MD5: %s", md5, md5Header, S3StatusBadMD5,
+ S3StatusMD5TooLong);
+
+ // Content-Disposition
+ do_put_header("Content-Disposition: attachment; filename=\"%s\"",
+ contentDispositionFilename, contentDispositionHeader,
+ S3StatusBadContentDispositionFilename,
+ S3StatusContentDispositionFilenameTooLong);
+
+ // ContentEncoding
+ do_put_header("Content-Encoding: %s", contentEncoding,
+ contentEncodingHeader, S3StatusBadContentEncoding,
+ S3StatusContentEncodingTooLong);
+
+ // Expires
+ if (params->putProperties && (params->putProperties->expires >= 0)) {
+ time_t t = (time_t) params->putProperties->expires;
+ strftime(values->expiresHeader, sizeof(values->expiresHeader),
+ "Expires: %a, %d %b %Y %H:%M:%S UTC", gmtime(&t));
+ }
+ else {
+ values->expiresHeader[0] = 0;
+ }
+
+ // If-Modified-Since
+ if (params->getConditions &&
+ (params->getConditions->ifModifiedSince >= 0)) {
+ time_t t = (time_t) params->getConditions->ifModifiedSince;
+ strftime(values->ifModifiedSinceHeader,
+ sizeof(values->ifModifiedSinceHeader),
+ "If-Modified-Since: %a, %d %b %Y %H:%M:%S UTC", gmtime(&t));
+ }
+ else {
+ values->ifModifiedSinceHeader[0] = 0;
+ }
+
+ // If-Unmodified-Since header
+ if (params->getConditions &&
+ (params->getConditions->ifNotModifiedSince >= 0)) {
+ time_t t = (time_t) params->getConditions->ifNotModifiedSince;
+ strftime(values->ifUnmodifiedSinceHeader,
+ sizeof(values->ifUnmodifiedSinceHeader),
+ "If-Unmodified-Since: %a, %d %b %Y %H:%M:%S UTC", gmtime(&t));
+ }
+ else {
+ values->ifUnmodifiedSinceHeader[0] = 0;
+ }
+
+ // If-Match header
+ do_get_header("If-Match: %s", ifMatchETag, ifMatchHeader,
+ S3StatusBadIfMatchETag, S3StatusIfMatchETagTooLong);
+
+ // If-None-Match header
+ do_get_header("If-None-Match: %s", ifNotMatchETag, ifNoneMatchHeader,
+ S3StatusBadIfNotMatchETag,
+ S3StatusIfNotMatchETagTooLong);
+
+ // Range header
+ if (params->startByte || params->byteCount) {
+ if (params->byteCount) {
+ snprintf(values->rangeHeader, sizeof(values->rangeHeader),
+ "Range: bytes=%llu-%llu",
+ (unsigned long long) params->startByte,
+ (unsigned long long) (params->startByte +
+ params->byteCount - 1));
+ }
+ else {
+ snprintf(values->rangeHeader, sizeof(values->rangeHeader),
+ "Range: bytes=%llu-",
+ (unsigned long long) params->startByte);
+ }
+ }
+ else {
+ values->rangeHeader[0] = 0;
+ }
+
+ return S3StatusOK;
+}
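+
+// Illustrative results (assumed values): an expires time corresponding to
+// 29 Jul 2008 20:36:14 UTC yields
+//     "Expires: Tue, 29 Jul 2008 20:36:14 UTC"
+// (If-Modified-Since and If-Unmodified-Since use the same date layout), and
+// startByte == 100 with byteCount == 50 yields
+//     "Range: bytes=100-149"
+// while byteCount == 0 produces the open-ended "Range: bytes=100-".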
+
+
+// URL encodes the params->key value into params->urlEncodedKey
+static S3Status encode_key(const RequestParams *params,
+ RequestComputedValues *values)
+{
+ return (urlEncode(values->urlEncodedKey, params->key, S3_MAX_KEY_SIZE) ?
+ S3StatusOK : S3StatusUriTooLong);
+}
+
+
+// Simple comparison function for comparing two HTTP header names that are
+// embedded within an HTTP header line, returning true if header1 sorts at
+// or before header2 alphabetically, false if not
+static int headerle(const char *header1, const char *header2)
+{
+ while (1) {
+ if (*header1 == ':') {
+ return (*header2 == ':');
+ }
+ else if (*header2 == ':') {
+ return 0;
+ }
+ else if (*header2 < *header1) {
+ return 0;
+ }
+ else if (*header2 > *header1) {
+ return 1;
+ }
+ header1++, header2++;
+ }
+}
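+
+// For example (illustrative): headerle("x-amz-acl: private",
+// "x-amz-meta-color: blue") returns 1 because "x-amz-acl" sorts before
+// "x-amz-meta-color"; comparing a header line with itself also returns 1,
+// since both names reach their ':' at the same position.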
+
+
+// Replace this with merge sort eventually; it's the best stable sort. But
+// since typically the number of elements being sorted is small, it doesn't
+// matter that much which sort is used, and gnome sort is the world's simplest
+// stable sort. Added a slight twist to the standard gnome_sort - don't go
+// forward +1, go forward to the last highest index considered. This saves
+// all the string comparisons that would be done "going forward", and thus
+// only does the necessary string comparisons to move values back into their
+// sorted position.
+static void header_gnome_sort(const char **headers, int size)
+{
+ int i = 0, last_highest = 0;
+
+ while (i < size) {
+ if ((i == 0) || headerle(headers[i - 1], headers[i])) {
+ i = ++last_highest;
+ }
+ else {
+ const char *tmp = headers[i];
+ headers[i] = headers[i - 1];
+ headers[--i] = tmp;
+ }
+ }
+}
+
+
+// Canonicalizes the x-amz- headers into the canonicalizedAmzHeaders buffer
+static void canonicalize_amz_headers(RequestComputedValues *values)
+{
+ // Make a copy of the headers that will be sorted
+ const char *sortedHeaders[S3_MAX_METADATA_COUNT];
+
+ memcpy(sortedHeaders, values->amzHeaders,
+ (values->amzHeadersCount * sizeof(sortedHeaders[0])));
+
+ // Now sort these
+ header_gnome_sort(sortedHeaders, values->amzHeadersCount);
+
+ // Now copy this sorted list into the buffer, all the while:
+ // - folding repeated headers into single comma-separated lines,
+ // - folding multi-line (continuation) headers into single lines, and
+ // - removing the space after the colon
+ int lastHeaderLen = 0, i;
+ char *buffer = values->canonicalizedAmzHeaders;
+ for (i = 0; i < values->amzHeadersCount; i++) {
+ const char *header = sortedHeaders[i];
+ const char *c = header;
+ // If the header names are the same, append the next value
+ if ((i > 0) &&
+ !strncmp(header, sortedHeaders[i - 1], lastHeaderLen)) {
+ // Replacing the previous newline with a comma
+ *(buffer - 1) = ',';
+ // Skip the header name and space
+ c += (lastHeaderLen + 1);
+ }
+ // Else this is a new header
+ else {
+ // Copy in everything up to the space in the ": "
+ while (*c != ' ') {
+ *buffer++ = *c++;
+ }
+ // Save the header len since it's a new header
+ lastHeaderLen = c - header;
+ // Skip the space
+ c++;
+ }
+ // Now copy in the value, folding the lines
+ while (*c) {
+ // If c points to a \r\n[whitespace] sequence, then fold
+ // this newline out
+ if ((*c == '\r') && (*(c + 1) == '\n') && is_blank(*(c + 2))) {
+ c += 3;
+ while (is_blank(*c)) {
+ c++;
+ }
+ // Also, what has most recently been copied into buffer may
+ // have been whitespace, and since we're folding whitespace
+ // out around this newline sequence, back buffer up over
+ // any whitespace it contains
+ while (is_blank(*(buffer - 1))) {
+ buffer--;
+ }
+ continue;
+ }
+ *buffer++ = *c++;
+ }
+ // Finally, add the newline
+ *buffer++ = '\n';
+ }
+
+ // Terminate the buffer
+ *buffer = 0;
+}
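+
+// Illustrative example (assumed header values): given amzHeaders containing,
+// in this order,
+//     "x-amz-meta-color: blue"
+//     "x-amz-acl: private"
+//     "x-amz-meta-color: green"
+// the canonicalized buffer produced above is
+//     "x-amz-acl:private\n"
+//     "x-amz-meta-color:blue,green\n"
+// i.e. names sorted, the space after the colon dropped, repeated names joined
+// with commas, and any \r\n continuation lines folded away.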
+
+
+// Canonicalizes the resource into params->canonicalizedResource
+static void canonicalize_resource(const char *bucketName,
+ const char *subResource,
+ const char *urlEncodedKey,
+ char *buffer)
+{
+ int len = 0;
+
+ *buffer = 0;
+
+#define append(str) len += sprintf(&(buffer[len]), "%s", str)
+
+ if (bucketName && bucketName[0]) {
+ buffer[len++] = '/';
+ append(bucketName);
+ }
+
+ append("/");
+
+ if (urlEncodedKey && urlEncodedKey[0]) {
+ append(urlEncodedKey);
+ }
+
+ if (subResource && subResource[0]) {
+ append("?");
+ append(subResource);
+ }
+}
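+
+// Illustrative example (assumed names): for bucketName "mybucket", an
+// already-URL-encoded key "photos/cat.jpg" and subResource "acl", the buffer
+// becomes "/mybucket/photos/cat.jpg?acl"; with no bucket, key, or
+// sub-resource it is simply "/".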
+
+
+// Convert an HttpRequestType to an HTTP Verb string
+static const char *http_request_type_to_verb(HttpRequestType requestType)
+{
+ switch (requestType) {
+ case HttpRequestTypeGET:
+ return "GET";
+ case HttpRequestTypeHEAD:
+ return "HEAD";
+ case HttpRequestTypePUT:
+ case HttpRequestTypeCOPY:
+ return "PUT";
+ default: // HttpRequestTypeDELETE
+ return "DELETE";
+ }
+}
+
+
+// Composes the Authorization header for the request
+static S3Status compose_auth_header(const RequestParams *params,
+ RequestComputedValues *values)
+{
+ // We allow for:
+ // 17 bytes for HTTP-Verb + \n
+ // 129 bytes for Content-MD5 + \n
+ // 129 bytes for Content-Type + \n
+ // 1 byte for empty Date + \n
+ // CanonicalizedAmzHeaders & CanonicalizedResource
+ char signbuf[17 + 129 + 129 + 1 +
+ (sizeof(values->canonicalizedAmzHeaders) - 1) +
+ (sizeof(values->canonicalizedResource) - 1) + 1];
+ int len = 0;
+
+#define signbuf_append(format, ...) \
+ len += snprintf(&(signbuf[len]), sizeof(signbuf) - len, \
+ format, __VA_ARGS__)
+
+ signbuf_append
+ ("%s\n", http_request_type_to_verb(params->httpRequestType));
+
+ // For MD5 and Content-Type, use the value in the actual header, because
+ // it's already been trimmed
+ signbuf_append("%s\n", values->md5Header[0] ?
+ &(values->md5Header[sizeof("Content-MD5: ") - 1]) : "");
+
+ signbuf_append
+ ("%s\n", values->contentTypeHeader[0] ?
+ &(values->contentTypeHeader[sizeof("Content-Type: ") - 1]) : "");
+
+ signbuf_append("%s", "\n"); // Date - we always use x-amz-date
+
+ signbuf_append("%s", values->canonicalizedAmzHeaders);
+
+ signbuf_append("%s", values->canonicalizedResource);
+
+ // Generate an HMAC-SHA-1 of the signbuf
+ unsigned char hmac[20];
+
+ HMAC_SHA1(hmac, (unsigned char *) params->bucketContext.secretAccessKey,
+ strlen(params->bucketContext.secretAccessKey),
+ (unsigned char *) signbuf, len);
+
+ // Now base-64 encode the results
+ char b64[((20 + 1) * 4) / 3];
+ int b64Len = base64Encode(hmac, 20, b64);
+
+ snprintf(values->authorizationHeader, sizeof(values->authorizationHeader),
+ "Authorization: AWS %s:%.*s", params->bucketContext.accessKeyId,
+ b64Len, b64);
+
+ return S3StatusOK;
+}
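+
+// For reference, the string-to-sign assembled above follows the classic S3
+// REST authentication layout (commonly known as AWS signature version 2):
+//
+//     HTTP-Verb \n
+//     Content-MD5 value (or empty) \n
+//     Content-Type value (or empty) \n
+//     \n                             (empty Date; x-amz-date is used instead)
+//     CanonicalizedAmzHeaders        (each line already ends in \n)
+//     CanonicalizedResource
+//
+// and the composed header has the shape (access key illustrative only):
+//     "Authorization: AWS AKIAEXAMPLE:<base64 of HMAC-SHA1 over signbuf>"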
+
+
+// Compose the URI to use for the request given the request parameters
+static S3Status compose_uri(char *buffer, int bufferSize,
+ const S3BucketContext *bucketContext,
+ const char *urlEncodedKey,
+ const char *subResource, const char *queryParams)
+{
+ int len = 0;
+
+#define uri_append(fmt, ...) \
+ do { \
+ len += snprintf(&(buffer[len]), bufferSize - len, fmt, __VA_ARGS__); \
+ if (len >= bufferSize) { \
+ return S3StatusUriTooLong; \
+ } \
+ } while (0)
+
+ uri_append("http%s://",
+ (bucketContext->protocol == S3ProtocolHTTP) ? "" : "s");
+
+ if (bucketContext->bucketName &&
+ bucketContext->bucketName[0]) {
+ if (bucketContext->uriStyle == S3UriStyleVirtualHost) {
+ uri_append("%s.s3.amazonaws.com", bucketContext->bucketName);
+ }
+ else {
+ uri_append("s3.amazonaws.com/%s", bucketContext->bucketName);
+ }
+ }
+ else {
+ uri_append("%s", "s3.amazonaws.com");
+ }
+
+ uri_append("%s", "/");
+
+ uri_append("%s", urlEncodedKey);
+
+ if (subResource && subResource[0]) {
+ uri_append("?%s", subResource);
+ }
+
+ if (queryParams) {
+ uri_append("%s%s", (subResource && subResource[0]) ? "&" : "?",
+ queryParams);
+ }
+
+ return S3StatusOK;
+}
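+
+// Illustrative examples (assumed bucket and key names): for bucket "mybucket"
+// and URL-encoded key "photos/cat.jpg" this composes
+//     virtual-host style:  https://mybucket.s3.amazonaws.com/photos/cat.jpg
+//     path style:          https://s3.amazonaws.com/mybucket/photos/cat.jpg
+// with the sub-resource appended after '?', queryParams after '?' or '&' as
+// appropriate, and "http://" instead of "https://" for S3ProtocolHTTP.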
+
+
+// Sets up the curl handle given the completely computed RequestParams
+static S3Status setup_curl(Request *request,
+ const RequestParams *params,
+ const RequestComputedValues *values)
+{
+ CURLcode status;
+
+#define curl_easy_setopt_safe(opt, val) \
+ if ((status = curl_easy_setopt \
+ (request->curl, opt, val)) != CURLE_OK) { \
+ return S3StatusFailedToInitializeRequest; \
+ }
+
+ // Debugging only
+ // curl_easy_setopt_safe(CURLOPT_VERBOSE, 1);
+
+ // Set private data to request for the benefit of S3RequestContext
+ curl_easy_setopt_safe(CURLOPT_PRIVATE, request);
+
+ // Set header callback and data
+ curl_easy_setopt_safe(CURLOPT_HEADERDATA, request);
+ curl_easy_setopt_safe(CURLOPT_HEADERFUNCTION, &curl_header_func);
+
+ // Set read callback, data, and readSize
+ curl_easy_setopt_safe(CURLOPT_READFUNCTION, &curl_read_func);
+ curl_easy_setopt_safe(CURLOPT_READDATA, request);
+
+ // Set write callback and data
+ curl_easy_setopt_safe(CURLOPT_WRITEFUNCTION, &curl_write_func);
+ curl_easy_setopt_safe(CURLOPT_WRITEDATA, request);
+
+ // Ask curl to parse the Last-Modified header. This is easier than
+ // parsing it ourselves.
+ curl_easy_setopt_safe(CURLOPT_FILETIME, 1);
+
+ // Curl docs suggest that this is necessary for multithreaded code.
+ // However, they also point out that DNS timeouts will not be honored
+ // during DNS lookups; this can be worked around by using the c-ares
+ // library, which we do not do yet.
+ curl_easy_setopt_safe(CURLOPT_NOSIGNAL, 1);
+
+ // Turn off Curl's built-in progress meter
+ curl_easy_setopt_safe(CURLOPT_NOPROGRESS, 1);
+
+ // xxx todo - support setting the proxy for Curl to use (can't use https
+ // for proxies though)
+
+ // xxx todo - support setting the network interface for Curl to use
+
+ // I think this is useful - we don't need interactive performance, we need
+ // to complete large operations quickly
+ curl_easy_setopt_safe(CURLOPT_TCP_NODELAY, 1);
+
+ // Don't use Curl's 'netrc' feature
+ curl_easy_setopt_safe(CURLOPT_NETRC, CURL_NETRC_IGNORED);
+
+ // Don't verify S3's certificate; there are known to be issues with
+ // it sometimes
+ // xxx todo - support an option for verifying the S3 CA (default false)
+ curl_easy_setopt_safe(CURLOPT_SSL_VERIFYPEER, 0);
+
+ // Follow any redirection directives that S3 sends
+ curl_easy_setopt_safe(CURLOPT_FOLLOWLOCATION, 1);
+
+ // A safety valve in case S3 goes bananas with redirects
+ curl_easy_setopt_safe(CURLOPT_MAXREDIRS, 10);
+
+ // Set the User-Agent; maybe Amazon will track these?
+ curl_easy_setopt_safe(CURLOPT_USERAGENT, userAgentG);
+
+ // Set the low speed limit and time; we abort transfers that stay at
+ // less than 1K per second for more than 15 seconds.
+ // xxx todo - make these configurable
+ // xxx todo - allow configurable max send and receive speed
+ curl_easy_setopt_safe(CURLOPT_LOW_SPEED_LIMIT, 1024);
+ curl_easy_setopt_safe(CURLOPT_LOW_SPEED_TIME, 15);
+
+ // Append standard headers
+#define append_standard_header(fieldName) \
+ if (values-> fieldName [0]) { \
+ request->headers = curl_slist_append(request->headers, \
+ values-> fieldName); \
+ }
+
+ // Would use CURLOPT_INFILESIZE_LARGE, but it is buggy in libcurl
+ if (params->httpRequestType == HttpRequestTypePUT) {
+ char header[256];
+ snprintf(header, sizeof(header), "Content-Length: %llu",
+ (unsigned long long) params->toS3CallbackTotalSize);
+ request->headers = curl_slist_append(request->headers, header);
+ request->headers = curl_slist_append(request->headers,
+ "Transfer-Encoding:");
+ }
+ else if (params->httpRequestType == HttpRequestTypeCOPY) {
+ request->headers = curl_slist_append(request->headers,
+ "Transfer-Encoding:");
+ }
+
+ append_standard_header(cacheControlHeader);
+ append_standard_header(contentTypeHeader);
+ append_standard_header(md5Header);
+ append_standard_header(contentDispositionHeader);
+ append_standard_header(contentEncodingHeader);
+ append_standard_header(expiresHeader);
+ append_standard_header(ifModifiedSinceHeader);
+ append_standard_header(ifUnmodifiedSinceHeader);
+ append_standard_header(ifMatchHeader);
+ append_standard_header(ifNoneMatchHeader);
+ append_standard_header(rangeHeader);
+ append_standard_header(authorizationHeader);
+
+ // Append x-amz- headers
+ int i;
+ for (i = 0; i < values->amzHeadersCount; i++) {
+ request->headers =
+ curl_slist_append(request->headers, values->amzHeaders[i]);
+ }
+
+ // Set the HTTP headers
+ curl_easy_setopt_safe(CURLOPT_HTTPHEADER, request->headers);
+
+ // Set URI
+ curl_easy_setopt_safe(CURLOPT_URL, request->uri);
+
+ // Set request type.
+ switch (params->httpRequestType) {
+ case HttpRequestTypeHEAD:
+ curl_easy_setopt_safe(CURLOPT_NOBODY, 1);
+ break;
+ case HttpRequestTypePUT:
+ case HttpRequestTypeCOPY:
+ curl_easy_setopt_safe(CURLOPT_UPLOAD, 1);
+ break;
+ case HttpRequestTypeDELETE:
+ curl_easy_setopt_safe(CURLOPT_CUSTOMREQUEST, "DELETE");
+ break;
+ default: // HttpRequestTypeGET
+ break;
+ }
+
+ return S3StatusOK;
+}
+
+
+static void request_deinitialize(Request *request)
+{
+ if (request->headers) {
+ curl_slist_free_all(request->headers);
+ }
+
+ error_parser_deinitialize(&(request->errorParser));
+
+ // curl_easy_reset prevents connections from being re-used for some
+ // reason. This makes HTTP Keep-Alive meaningless and is very bad for
+ // performance. But it is necessary to allow curl to work properly.
+ // xxx todo figure out why
+ curl_easy_reset(request->curl);
+}
+
+
+static S3Status request_get(const RequestParams *params,
+ const RequestComputedValues *values,
+ Request **reqReturn)
+{
+ Request *request = 0;
+
+ // Try to get one from the request stack. We hold the lock for the
+ // shortest time possible here.
+ pthread_mutex_lock(&requestStackMutexG);
+
+ if (requestStackCountG) {
+ request = requestStackG[--requestStackCountG];
+ }
+
+ pthread_mutex_unlock(&requestStackMutexG);
+
+ // If we got one, deinitialize it for re-use
+ if (request) {
+ request_deinitialize(request);
+ }
+ // Else there wasn't one available in the request stack, so create one
+ else {
+ if (!(request = (Request *) malloc(sizeof(Request)))) {
+ return S3StatusOutOfMemory;
+ }
+ if (!(request->curl = curl_easy_init())) {
+ free(request);
+ return S3StatusFailedToInitializeRequest;
+ }
+ }
+
+ // Initialize the request
+ request->prev = 0;
+ request->next = 0;
+
+ // Request status is initialized to no error, will be updated whenever
+ // an error occurs
+ request->status = S3StatusOK;
+
+ S3Status status;
+
+ // Start out with no headers
+ request->headers = 0;
+
+ // Compute the URL
+ if ((status = compose_uri
+ (request->uri, sizeof(request->uri),
+ &(params->bucketContext), values->urlEncodedKey,
+ params->subResource, params->queryParams)) != S3StatusOK) {
+ curl_easy_cleanup(request->curl);
+ free(request);
+ return status;
+ }
+
+ // Set all of the curl handle options
+ if ((status = setup_curl(request, params, values)) != S3StatusOK) {
+ curl_easy_cleanup(request->curl);
+ free(request);
+ return status;
+ }
+
+ request->propertiesCallback = params->propertiesCallback;
+
+ request->toS3Callback = params->toS3Callback;
+
+ request->toS3CallbackBytesRemaining = params->toS3CallbackTotalSize;
+
+ request->fromS3Callback = params->fromS3Callback;
+
+ request->completeCallback = params->completeCallback;
+
+ request->callbackData = params->callbackData;
+
+ response_headers_handler_initialize(&(request->responseHeadersHandler));
+
+ request->propertiesCallbackMade = 0;
+
+ error_parser_initialize(&(request->errorParser));
+
+ *reqReturn = request;
+
+ return S3StatusOK;
+}
+
+
+static void request_destroy(Request *request)
+{
+ request_deinitialize(request);
+ curl_easy_cleanup(request->curl);
+ free(request);
+}
+
+
+static void request_release(Request *request)
+{
+ pthread_mutex_lock(&requestStackMutexG);
+
+ // If the request stack is full, destroy this one
+ if (requestStackCountG == REQUEST_STACK_SIZE) {
+ pthread_mutex_unlock(&requestStackMutexG);
+ request_destroy(request);
+ }
+ // Else put this one at the front of the request stack; we do this because
+ // we want the most-recently-used curl handle to be re-used on the next
+ // request, to maximize our chances of re-using a TCP connection before it
+ // times out
+ else {
+ requestStackG[requestStackCountG++] = request;
+ pthread_mutex_unlock(&requestStackMutexG);
+ }
+}
+
+
+S3Status request_api_initialize(const char *userAgentInfo, int flags)
+{
+ if (curl_global_init(CURL_GLOBAL_ALL &
+ ~((flags & S3_INIT_WINSOCK) ? 0 : CURL_GLOBAL_WIN32))
+ != CURLE_OK) {
+ return S3StatusInternalError;
+ }
+
+ pthread_mutex_init(&requestStackMutexG, 0);
+
+ requestStackCountG = 0;
+
+ if (!userAgentInfo || !*userAgentInfo) {
+ userAgentInfo = "Unknown";
+ }
+
+ char platform[96];
+ struct utsname utsn;
+ if (uname(&utsn)) {
+ strncpy(platform, "Unknown", sizeof(platform));
+ // Because strncpy doesn't always zero terminate
+ platform[sizeof(platform) - 1] = 0;
+ }
+ else {
+ snprintf(platform, sizeof(platform), "%s%s%s", utsn.sysname,
+ utsn.machine[0] ? " " : "", utsn.machine);
+ }
+
+ snprintf(userAgentG, sizeof(userAgentG),
+ "Mozilla/4.0 (Compatible; %s; libs3 %s.%s; %s)",
+ userAgentInfo, LIBS3_VER_MAJOR, LIBS3_VER_MINOR, platform);
+
+ return S3StatusOK;
+}
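+
+// Illustrative result (platform and version strings vary): on a Linux x86_64
+// host with userAgentInfo "s3", the User-Agent composed above looks like
+//     "Mozilla/4.0 (Compatible; s3; libs3 <major>.<minor>; Linux x86_64)"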
+
+
+void request_api_deinitialize()
+{
+ pthread_mutex_destroy(&requestStackMutexG);
+
+ while (requestStackCountG--) {
+ request_destroy(requestStackG[requestStackCountG]);
+ }
+}
+
+
+void request_perform(const RequestParams *params, S3RequestContext *context)
+{
+ Request *request;
+ S3Status status;
+
+#define return_status(status) \
+ (*(params->completeCallback))(status, 0, params->callbackData); \
+ return
+
+ // These will hold the computed values
+ RequestComputedValues computed;
+
+ // Validate the bucket name
+ if (params->bucketContext.bucketName &&
+ ((status = S3_validate_bucket_name
+ (params->bucketContext.bucketName,
+ params->bucketContext.uriStyle)) != S3StatusOK)) {
+ return_status(status);
+ }
+
+ // Compose the amz headers
+ if ((status = compose_amz_headers(params, &computed)) != S3StatusOK) {
+ return_status(status);
+ }
+
+ // Compose standard headers
+ if ((status = compose_standard_headers
+ (params, &computed)) != S3StatusOK) {
+ return_status(status);
+ }
+
+ // URL encode the key
+ if ((status = encode_key(params, &computed)) != S3StatusOK) {
+ return_status(status);
+ }
+
+ // Compute the canonicalized amz headers
+ canonicalize_amz_headers(&computed);
+
+ // Compute the canonicalized resource
+ canonicalize_resource(params->bucketContext.bucketName,
+ params->subResource, computed.urlEncodedKey,
+ computed.canonicalizedResource);
+
+ // Compose Authorization header
+ if ((status = compose_auth_header(params, &computed)) != S3StatusOK) {
+ return_status(status);
+ }
+
+ // Get an initialized Request structure now
+ if ((status = request_get(params, &computed, &request)) != S3StatusOK) {
+ return_status(status);
+ }
+
+ // If a RequestContext was provided, add the request to the curl multi
+ if (context) {
+ CURLMcode code = curl_multi_add_handle(context->curlm, request->curl);
+ if (code == CURLM_OK) {
+ if (context->requests) {
+ request->prev = context->requests->prev;
+ request->next = context->requests;
+ context->requests->prev->next = request;
+ context->requests->prev = request;
+ }
+ else {
+ context->requests = request->next = request->prev = request;
+ }
+ }
+ else {
+ if (request->status == S3StatusOK) {
+ request->status = (code == CURLM_OUT_OF_MEMORY) ?
+ S3StatusOutOfMemory : S3StatusInternalError;
+ }
+ request_finish(request);
+ }
+ }
+ // Else, perform the request immediately
+ else {
+ CURLcode code = curl_easy_perform(request->curl);
+ if ((code != CURLE_OK) && (request->status == S3StatusOK)) {
+ request->status = request_curl_code_to_status(code);
+ }
+ // Finish the request, ensuring that all callbacks have been made, and
+ // also releases the request
+ request_finish(request);
+ }
+}
+
+
+void request_finish(Request *request)
+{
+ // If we haven't detected this already, we now know that the headers are
+ // definitely done being read in
+ request_headers_done(request);
+
+ // If there was no error processing the request, then possibly there was
+ // an S3 error parsed, which should be converted into the request status
+ if (request->status == S3StatusOK) {
+ error_parser_convert_status(&(request->errorParser),
+ &(request->status));
+ // If there still was no error recorded, then it is possible that
+ // there was in fact an error but that there was no error XML
+ // detailing the error
+ if ((request->status == S3StatusOK) &&
+ ((request->httpResponseCode < 200) ||
+ (request->httpResponseCode > 299))) {
+ switch (request->httpResponseCode) {
+ case 0:
+ // This happens if the request never got any HTTP response
+ // headers at all, we call this a ConnectionFailed error
+ request->status = S3StatusConnectionFailed;
+ break;
+ case 100: // Some versions of libcurl erroneously set HTTP
+ // status to this
+ break;
+ case 301:
+ request->status = S3StatusErrorPermanentRedirect;
+ break;
+ case 307:
+ request->status = S3StatusHttpErrorMovedTemporarily;
+ break;
+ case 400:
+ request->status = S3StatusHttpErrorBadRequest;
+ break;
+ case 403:
+ request->status = S3StatusHttpErrorForbidden;
+ break;
+ case 404:
+ request->status = S3StatusHttpErrorNotFound;
+ break;
+ case 405:
+ request->status = S3StatusErrorMethodNotAllowed;
+ break;
+ case 409:
+ request->status = S3StatusHttpErrorConflict;
+ break;
+ case 411:
+ request->status = S3StatusErrorMissingContentLength;
+ break;
+ case 412:
+ request->status = S3StatusErrorPreconditionFailed;
+ break;
+ case 416:
+ request->status = S3StatusErrorInvalidRange;
+ break;
+ case 500:
+ request->status = S3StatusErrorInternalError;
+ break;
+ case 501:
+ request->status = S3StatusErrorNotImplemented;
+ break;
+ case 503:
+ request->status = S3StatusErrorSlowDown;
+ break;
+ default:
+ request->status = S3StatusHttpErrorUnknown;
+ break;
+ }
+ }
+ }
+
+ (*(request->completeCallback))
+ (request->status, &(request->errorParser.s3ErrorDetails),
+ request->callbackData);
+
+ request_release(request);
+}
+
+
+S3Status request_curl_code_to_status(CURLcode code)
+{
+ switch (code) {
+ case CURLE_OUT_OF_MEMORY:
+ return S3StatusOutOfMemory;
+ case CURLE_COULDNT_RESOLVE_PROXY:
+ case CURLE_COULDNT_RESOLVE_HOST:
+ return S3StatusNameLookupError;
+ case CURLE_COULDNT_CONNECT:
+ return S3StatusFailedToConnect;
+ case CURLE_WRITE_ERROR:
+ case CURLE_OPERATION_TIMEDOUT:
+ return S3StatusConnectionFailed;
+ case CURLE_PARTIAL_FILE:
+ return S3StatusOK;
+ case CURLE_SSL_CACERT:
+ return S3StatusServerFailedVerification;
+ default:
+ return S3StatusInternalError;
+ }
+}
+
+
+S3Status S3_generate_authenticated_query_string
+ (char *buffer, const S3BucketContext *bucketContext,
+ const char *key, int64_t expires, const char *resource)
+{
+#define MAX_EXPIRES (((int64_t) 1 << 31) - 1)
+ // S3 seems to only accept expiration dates up to the number of seconds
+ // representable by a signed 32-bit integer
+ if (expires < 0) {
+ expires = MAX_EXPIRES;
+ }
+ else if (expires > MAX_EXPIRES) {
+ expires = MAX_EXPIRES;
+ }
+
+ // xxx todo: rework this so that it can be incorporated into shared code
+ // with request_perform(). It's really unfortunate that this code is not
+ // shared with request_perform().
+
+ // URL encode the key
+ char urlEncodedKey[S3_MAX_KEY_SIZE * 3];
+ if (key) {
+ urlEncode(urlEncodedKey, key, strlen(key));
+ }
+ else {
+ urlEncodedKey[0] = 0;
+ }
+
+ // Compute canonicalized resource
+ char canonicalizedResource[MAX_CANONICALIZED_RESOURCE_SIZE];
+ canonicalize_resource(bucketContext->bucketName, resource, urlEncodedKey,
+ canonicalizedResource);
+
+ // We allow for:
+ // 17 bytes for HTTP-Verb + \n
+ // 1 byte for empty Content-MD5 + \n
+ // 1 byte for empty Content-Type + \n
+ // 20 bytes for Expires + \n
+ // 0 bytes for CanonicalizedAmzHeaders
+ // CanonicalizedResource
+ char signbuf[17 + 1 + 1 + 1 + 20 + sizeof(canonicalizedResource) + 1];
+ int len = 0;
+
+#define signbuf_append(format, ...) \
+ len += snprintf(&(signbuf[len]), sizeof(signbuf) - len, \
+ format, __VA_ARGS__)
+
+ signbuf_append("%s\n", "GET"); // HTTP-Verb
+ signbuf_append("%s\n", ""); // Content-MD5
+ signbuf_append("%s\n", ""); // Content-Type
+ signbuf_append("%llu\n", (unsigned long long) expires);
+ signbuf_append("%s", canonicalizedResource);
+
+ // Generate an HMAC-SHA-1 of the signbuf
+ unsigned char hmac[20];
+
+ HMAC_SHA1(hmac, (unsigned char *) bucketContext->secretAccessKey,
+ strlen(bucketContext->secretAccessKey),
+ (unsigned char *) signbuf, len);
+
+ // Now base-64 encode the results
+ char b64[((20 + 1) * 4) / 3];
+ int b64Len = base64Encode(hmac, 20, b64);
+
+ // Now urlEncode that
+ char signature[sizeof(b64) * 3];
+ urlEncode(signature, b64, b64Len);
+
+ // Finally, compose the uri, with params:
+ // ?AWSAccessKeyId=xxx&Expires=nnn&Signature=xxx
+ char queryParams[sizeof("AWSAccessKeyId=") + 20 +
+ sizeof("&Expires=") + 20 +
+ sizeof("&Signature=") + sizeof(signature) + 1];
+
+ sprintf(queryParams, "AWSAccessKeyId=%s&Expires=%ld&Signature=%s",
+ bucketContext->accessKeyId, (long) expires, signature);
+
+ return compose_uri(buffer, S3_MAX_AUTHENTICATED_QUERY_STRING_SIZE,
+ bucketContext, urlEncodedKey, resource, queryParams);
+}
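+
+// Illustrative output (assumed values): for a virtual-host-style bucket
+// context with bucket "mybucket", key "photos/cat.jpg" and expires
+// 1217363774, the generated string has the shape (wrapped here for clarity;
+// the real output is a single line):
+//     https://mybucket.s3.amazonaws.com/photos/cat.jpg
+//         ?AWSAccessKeyId=AKIAEXAMPLE&Expires=1217363774
+//         &Signature=<URL-encoded base64 HMAC-SHA1>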
--- /dev/null
+/** **************************************************************************
+ * request_context.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <curl/curl.h>
+#include <stdlib.h>
+#include <sys/select.h>
+#include "request.h"
+#include "request_context.h"
+
+
+S3Status S3_create_request_context(S3RequestContext **requestContextReturn)
+{
+ *requestContextReturn =
+ (S3RequestContext *) malloc(sizeof(S3RequestContext));
+
+ if (!*requestContextReturn) {
+ return S3StatusOutOfMemory;
+ }
+
+ if (!((*requestContextReturn)->curlm = curl_multi_init())) {
+ free(*requestContextReturn);
+ return S3StatusOutOfMemory;
+ }
+
+ (*requestContextReturn)->requests = 0;
+
+ return S3StatusOK;
+}
+
+
+void S3_destroy_request_context(S3RequestContext *requestContext)
+{
+ curl_multi_cleanup(requestContext->curlm);
+
+ // For each request in the context, call back its done method with
+ // 'interrupted' status
+ Request *r = requestContext->requests, *rFirst = r;
+
+ if (r) do {
+ r->status = S3StatusInterrupted;
+ Request *rNext = r->next;
+ request_finish(r);
+ r = rNext;
+ } while (r != rFirst);
+
+ free(requestContext);
+}
+
+
+S3Status S3_runall_request_context(S3RequestContext *requestContext)
+{
+ int requestsRemaining;
+ do {
+ fd_set readfds, writefds, exceptfds;
+ FD_ZERO(&readfds);
+ FD_ZERO(&writefds);
+ FD_ZERO(&exceptfds);
+ int maxfd;
+ S3Status status = S3_get_request_context_fdsets
+ (requestContext, &readfds, &writefds, &exceptfds, &maxfd);
+ if (status != S3StatusOK) {
+ return status;
+ }
+ // curl will return -1 if it hasn't even created any fds yet because
+ // none of the connections have started yet. In this case, don't
+ // do the select at all, because it will wait forever; instead, just
+ // skip it and go straight to running the underlying CURL handles
+ if (maxfd != -1) {
+ int64_t timeout = S3_get_request_context_timeout(requestContext);
+ struct timeval tv = { timeout / 1000, (timeout % 1000) * 1000 };
+ select(maxfd + 1, &readfds, &writefds, &exceptfds,
+ (timeout == -1) ? 0 : &tv);
+ }
+ status = S3_runonce_request_context(requestContext,
+ &requestsRemaining);
+ if (status != S3StatusOK) {
+ return status;
+ }
+ } while (requestsRemaining);
+
+ return S3StatusOK;
+}
+
+
+S3Status S3_runonce_request_context(S3RequestContext *requestContext,
+ int *requestsRemainingReturn)
+{
+ CURLMcode status;
+
+ do {
+ status = curl_multi_perform(requestContext->curlm,
+ requestsRemainingReturn);
+
+ switch (status) {
+ case CURLM_OK:
+ case CURLM_CALL_MULTI_PERFORM:
+ break;
+ case CURLM_OUT_OF_MEMORY:
+ return S3StatusOutOfMemory;
+ default:
+ return S3StatusInternalError;
+ }
+
+ CURLMsg *msg;
+ int junk;
+ while ((msg = curl_multi_info_read(requestContext->curlm, &junk))) {
+ if (msg->msg != CURLMSG_DONE) {
+ return S3StatusInternalError;
+ }
+ Request *request;
+ if (curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE,
+ (char **) (char *) &request) != CURLE_OK) {
+ return S3StatusInternalError;
+ }
+ // Remove the request from the list of requests
+ if (request->prev == request->next) {
+ // It was the only one on the list
+ requestContext->requests = 0;
+ }
+ else {
+ // The order doesn't matter, so just in case request was at
+ // the head of the list, put the one after request at the
+ // head of the list
+ requestContext->requests = request->next;
+ request->prev->next = request->next;
+ request->next->prev = request->prev;
+ }
+ if ((msg->data.result != CURLE_OK) &&
+ (request->status == S3StatusOK)) {
+ request->status = request_curl_code_to_status
+ (msg->data.result);
+ }
+ if (curl_multi_remove_handle(requestContext->curlm,
+ msg->easy_handle) != CURLM_OK) {
+ return S3StatusInternalError;
+ }
+ // Finish the request, ensuring that all callbacks have been made,
+ // and also releases the request
+ request_finish(request);
+ // Now, since a callback was made, there may be new requests
+ // queued up to be performed immediately, so do so
+ status = CURLM_CALL_MULTI_PERFORM;
+ }
+ } while (status == CURLM_CALL_MULTI_PERFORM);
+
+ return S3StatusOK;
+}
+
+S3Status S3_get_request_context_fdsets(S3RequestContext *requestContext,
+ fd_set *readFdSet, fd_set *writeFdSet,
+ fd_set *exceptFdSet, int *maxFd)
+{
+ return ((curl_multi_fdset(requestContext->curlm, readFdSet, writeFdSet,
+ exceptFdSet, maxFd) == CURLM_OK) ?
+ S3StatusOK : S3StatusInternalError);
+}
+
+int64_t S3_get_request_context_timeout(S3RequestContext *requestContext)
+{
+ long timeout;
+
+ if (curl_multi_timeout(requestContext->curlm, &timeout) != CURLM_OK) {
+ timeout = 0;
+ }
+
+ return timeout;
+}
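+
+// Typical usage sketch (illustrative only; S3_runall_request_context above
+// does exactly this internally): an application driving its own event loop
+// could do roughly the following.
+//
+//     S3RequestContext *ctx;
+//     S3_create_request_context(&ctx);
+//     // ... queue one or more requests against ctx via libs3 operations ...
+//     int remaining;
+//     do {
+//         fd_set r, w, e;
+//         FD_ZERO(&r); FD_ZERO(&w); FD_ZERO(&e);
+//         int maxfd;
+//         S3_get_request_context_fdsets(ctx, &r, &w, &e, &maxfd);
+//         if (maxfd != -1) {
+//             select(maxfd + 1, &r, &w, &e, 0 /* or a timeout */);
+//         }
+//         S3_runonce_request_context(ctx, &remaining);
+//     } while (remaining);
+//     S3_destroy_request_context(ctx);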
--- /dev/null
+/** **************************************************************************
+ * response_headers_handler.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <ctype.h>
+#include <string.h>
+#include "response_headers_handler.h"
+
+
+void response_headers_handler_initialize(ResponseHeadersHandler *handler)
+{
+ handler->responseProperties.requestId = 0;
+ handler->responseProperties.requestId2 = 0;
+ handler->responseProperties.contentType = 0;
+ handler->responseProperties.contentLength = 0;
+ handler->responseProperties.server = 0;
+ handler->responseProperties.eTag = 0;
+ handler->responseProperties.lastModified = -1;
+ handler->responseProperties.metaDataCount = 0;
+ handler->responseProperties.metaData = 0;
+ handler->done = 0;
+ string_multibuffer_initialize(handler->responsePropertyStrings);
+ string_multibuffer_initialize(handler->responseMetaDataStrings);
+}
+
+
+void response_headers_handler_add(ResponseHeadersHandler *handler,
+ char *header, int len)
+{
+ S3ResponseProperties *responseProperties = &(handler->responseProperties);
+ char *end = &(header[len]);
+
+ // Curl might call back the header function after the body has been
+ // received, for 'chunked encoded' contents. We don't handle this as of
+ // yet, and it's not clear that it would ever be useful.
+ if (handler->done) {
+ return;
+ }
+
+ // If we've already filled up the response headers, ignore this data.
+ // This sucks, but it shouldn't happen - S3 should not be sending back
+ // really long headers.
+ if (handler->responsePropertyStringsSize ==
+ (sizeof(handler->responsePropertyStrings) - 1)) {
+ return;
+ }
+
+ // It should not be possible to have a header line less than 3 characters
+ // long
+ if (len < 3) {
+ return;
+ }
+
+ // Skip whitespace at beginning of header; there never should be any,
+ // but just to be safe
+ while (is_blank(*header)) {
+ header++;
+ }
+
+ // The header must end in \r\n, so skip back over it, and also over any
+ // trailing whitespace
+ end -= 3;
+ while ((end > header) && is_blank(*end)) {
+ end--;
+ }
+ if (!is_blank(*end)) {
+ end++;
+ }
+
+ if (end == header) {
+ // totally bogus
+ return;
+ }
+
+ *end = 0;
+
+ // Find the colon to split the header up
+ char *c = header;
+ while (*c && (*c != ':')) {
+ c++;
+ }
+
+ int namelen = c - header;
+
+ // Now walk c past the colon
+ c++;
+ // Now skip whitespace to the beginning of the value
+ while (is_blank(*c)) {
+ c++;
+ }
+
+ int valuelen = (end - c) + 1, fit;
+
+ if (!strncmp(header, "x-amz-request-id", namelen)) {
+ responseProperties->requestId =
+ string_multibuffer_current(handler->responsePropertyStrings);
+ string_multibuffer_add(handler->responsePropertyStrings, c,
+ valuelen, fit);
+ }
+ else if (!strncmp(header, "x-amz-id-2", namelen)) {
+ responseProperties->requestId2 =
+ string_multibuffer_current(handler->responsePropertyStrings);
+ string_multibuffer_add(handler->responsePropertyStrings, c,
+ valuelen, fit);
+ }
+ else if (!strncmp(header, "Content-Type", namelen)) {
+ responseProperties->contentType =
+ string_multibuffer_current(handler->responsePropertyStrings);
+ string_multibuffer_add(handler->responsePropertyStrings, c,
+ valuelen, fit);
+ }
+ else if (!strncmp(header, "Content-Length", namelen)) {
+ handler->responseProperties.contentLength = 0;
+ while (*c) {
+ handler->responseProperties.contentLength *= 10;
+ handler->responseProperties.contentLength += (*c++ - '0');
+ }
+ }
+ else if (!strncmp(header, "Server", namelen)) {
+ responseProperties->server =
+ string_multibuffer_current(handler->responsePropertyStrings);
+ string_multibuffer_add(handler->responsePropertyStrings, c,
+ valuelen, fit);
+ }
+ else if (!strncmp(header, "ETag", namelen)) {
+ responseProperties->eTag =
+ string_multibuffer_current(handler->responsePropertyStrings);
+ string_multibuffer_add(handler->responsePropertyStrings, c,
+ valuelen, fit);
+ }
+ else if (!strncmp(header, S3_METADATA_HEADER_NAME_PREFIX,
+ sizeof(S3_METADATA_HEADER_NAME_PREFIX) - 1)) {
+ // Make sure there is room for another x-amz-meta header
+ if (handler->responseProperties.metaDataCount ==
+ (int) (sizeof(handler->responseMetaData) /
+ sizeof(handler->responseMetaData[0]))) {
+ return;
+ }
+ // Copy the name in
+ char *metaName = &(header[sizeof(S3_METADATA_HEADER_NAME_PREFIX) - 1]);
+ int metaNameLen =
+ (namelen - (sizeof(S3_METADATA_HEADER_NAME_PREFIX) - 1));
+ char *copiedName =
+ string_multibuffer_current(handler->responseMetaDataStrings);
+ string_multibuffer_add(handler->responseMetaDataStrings, metaName,
+ metaNameLen, fit);
+ if (!fit) {
+ return;
+ }
+
+ // Copy the value in
+ char *copiedValue =
+ string_multibuffer_current(handler->responseMetaDataStrings);
+ string_multibuffer_add(handler->responseMetaDataStrings,
+ c, valuelen, fit);
+ if (!fit) {
+ return;
+ }
+
+ if (!handler->responseProperties.metaDataCount) {
+ handler->responseProperties.metaData =
+ handler->responseMetaData;
+ }
+
+ S3NameValue *metaHeader =
+ &(handler->responseMetaData
+ [handler->responseProperties.metaDataCount++]);
+ metaHeader->name = copiedName;
+ metaHeader->value = copiedValue;
+ }
+}
+
+
+void response_headers_handler_done(ResponseHeadersHandler *handler, CURL *curl)
+{
+ // Now get the last modification time from curl, since it's easiest to let
+ // curl parse it
+ time_t lastModified;
+ if (curl_easy_getinfo
+ (curl, CURLINFO_FILETIME, &lastModified) == CURLE_OK) {
+ handler->responseProperties.lastModified = lastModified;
+ }
+
+ handler->done = 1;
+}
--- /dev/null
+/** **************************************************************************
+ * s3.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+/**
+ * This is a 'driver' program that simply converts command-line input into
+ * calls to libs3 functions, and prints the results.
+ **/
+
+#include <ctype.h>
+#include <getopt.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <strings.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <time.h>
+#include <unistd.h>
+#include "libs3.h"
+
+// Some Windows stuff
+#ifndef FOPEN_EXTRA_FLAGS
+#define FOPEN_EXTRA_FLAGS ""
+#endif
+
+// Also needed for Windows, because somehow MinGW doesn't define this
+extern int putenv(char *);
+
+
+// Command-line options, saved as globals ------------------------------------
+
+static int forceG = 0;
+static int showResponsePropertiesG = 0;
+static S3Protocol protocolG = S3ProtocolHTTPS;
+static S3UriStyle uriStyleG = S3UriStylePath;
+static int retriesG = 5;
+
+
+// Environment variables, saved as globals ----------------------------------
+
+static const char *accessKeyIdG = 0;
+static const char *secretAccessKeyG = 0;
+
+
+// Request results, saved as globals -----------------------------------------
+
+static int statusG = 0;
+static char errorDetailsG[4096] = { 0 };
+
+
+// Other globals -------------------------------------------------------------
+
+static char putenvBufG[256];
+
+
+// Option prefixes -----------------------------------------------------------
+
+#define LOCATION_PREFIX "location="
+#define LOCATION_PREFIX_LEN (sizeof(LOCATION_PREFIX) - 1)
+#define CANNED_ACL_PREFIX "cannedAcl="
+#define CANNED_ACL_PREFIX_LEN (sizeof(CANNED_ACL_PREFIX) - 1)
+#define PREFIX_PREFIX "prefix="
+#define PREFIX_PREFIX_LEN (sizeof(PREFIX_PREFIX) - 1)
+#define MARKER_PREFIX "marker="
+#define MARKER_PREFIX_LEN (sizeof(MARKER_PREFIX) - 1)
+#define DELIMITER_PREFIX "delimiter="
+#define DELIMITER_PREFIX_LEN (sizeof(DELIMITER_PREFIX) - 1)
+#define MAXKEYS_PREFIX "maxkeys="
+#define MAXKEYS_PREFIX_LEN (sizeof(MAXKEYS_PREFIX) - 1)
+#define FILENAME_PREFIX "filename="
+#define FILENAME_PREFIX_LEN (sizeof(FILENAME_PREFIX) - 1)
+#define CONTENT_LENGTH_PREFIX "contentLength="
+#define CONTENT_LENGTH_PREFIX_LEN (sizeof(CONTENT_LENGTH_PREFIX) - 1)
+#define CACHE_CONTROL_PREFIX "cacheControl="
+#define CACHE_CONTROL_PREFIX_LEN (sizeof(CACHE_CONTROL_PREFIX) - 1)
+#define CONTENT_TYPE_PREFIX "contentType="
+#define CONTENT_TYPE_PREFIX_LEN (sizeof(CONTENT_TYPE_PREFIX) - 1)
+#define MD5_PREFIX "md5="
+#define MD5_PREFIX_LEN (sizeof(MD5_PREFIX) - 1)
+#define CONTENT_DISPOSITION_FILENAME_PREFIX "contentDispositionFilename="
+#define CONTENT_DISPOSITION_FILENAME_PREFIX_LEN \
+ (sizeof(CONTENT_DISPOSITION_FILENAME_PREFIX) - 1)
+#define CONTENT_ENCODING_PREFIX "contentEncoding="
+#define CONTENT_ENCODING_PREFIX_LEN (sizeof(CONTENT_ENCODING_PREFIX) - 1)
+#define EXPIRES_PREFIX "expires="
+#define EXPIRES_PREFIX_LEN (sizeof(EXPIRES_PREFIX) - 1)
+#define X_AMZ_META_PREFIX "x-amz-meta-"
+#define X_AMZ_META_PREFIX_LEN (sizeof(X_AMZ_META_PREFIX) - 1)
+#define IF_MODIFIED_SINCE_PREFIX "ifModifiedSince="
+#define IF_MODIFIED_SINCE_PREFIX_LEN (sizeof(IF_MODIFIED_SINCE_PREFIX) - 1)
+#define IF_NOT_MODIFIED_SINCE_PREFIX "ifNotmodifiedSince="
+#define IF_NOT_MODIFIED_SINCE_PREFIX_LEN \
+ (sizeof(IF_NOT_MODIFIED_SINCE_PREFIX) - 1)
+#define IF_MATCH_PREFIX "ifMatch="
+#define IF_MATCH_PREFIX_LEN (sizeof(IF_MATCH_PREFIX) - 1)
+#define IF_NOT_MATCH_PREFIX "ifNotMatch="
+#define IF_NOT_MATCH_PREFIX_LEN (sizeof(IF_NOT_MATCH_PREFIX) - 1)
+#define START_BYTE_PREFIX "startByte="
+#define START_BYTE_PREFIX_LEN (sizeof(START_BYTE_PREFIX) - 1)
+#define BYTE_COUNT_PREFIX "byteCount="
+#define BYTE_COUNT_PREFIX_LEN (sizeof(BYTE_COUNT_PREFIX) - 1)
+#define ALL_DETAILS_PREFIX "allDetails="
+#define ALL_DETAILS_PREFIX_LEN (sizeof(ALL_DETAILS_PREFIX) - 1)
+#define NO_STATUS_PREFIX "noStatus="
+#define NO_STATUS_PREFIX_LEN (sizeof(NO_STATUS_PREFIX) - 1)
+#define RESOURCE_PREFIX "resource="
+#define RESOURCE_PREFIX_LEN (sizeof(RESOURCE_PREFIX) - 1)
+#define TARGET_BUCKET_PREFIX "targetBucket="
+#define TARGET_BUCKET_PREFIX_LEN (sizeof(TARGET_BUCKET_PREFIX) - 1)
+#define TARGET_PREFIX_PREFIX "targetPrefix="
+#define TARGET_PREFIX_PREFIX_LEN (sizeof(TARGET_PREFIX_PREFIX) - 1)
+
+
+// util ----------------------------------------------------------------------
+
+static void S3_init()
+{
+ S3Status status;
+ if ((status = S3_initialize("s3", S3_INIT_ALL))
+ != S3StatusOK) {
+ fprintf(stderr, "Failed to initialize libs3: %s\n",
+ S3_get_status_name(status));
+ exit(-1);
+ }
+}
+
+
+static void printError()
+{
+ if (statusG < S3StatusErrorAccessDenied) {
+ fprintf(stderr, "\nERROR: %s\n", S3_get_status_name(statusG));
+ }
+ else {
+ fprintf(stderr, "\nERROR: %s\n", S3_get_status_name(statusG));
+ fprintf(stderr, "%s\n", errorDetailsG);
+ }
+}
+
+
+static void usageExit(FILE *out)
+{
+ fprintf(out,
+"\n Options:\n"
+"\n"
+" Command Line:\n"
+"\n"
+" -f/--force : force operation despite warnings\n"
+" -h/--vhost-style : use virtual-host-style URIs (default is "
+ "path-style)\n"
+" -u/--unencrypted : unencrypted (use HTTP instead of HTTPS)\n"
+" -s/--show-properties : show response properties on stdout\n"
+" -r/--retries : retry retryable failures this number of times\n"
+" (default is 5)\n"
+"\n"
+" Environment:\n"
+"\n"
+" S3_ACCESS_KEY_ID : S3 access key ID (required)\n"
+" S3_SECRET_ACCESS_KEY : S3 secret access key (required)\n"
+"\n"
+" Commands (with <required parameters> and [optional parameters]) :\n"
+"\n"
+" (NOTE: all command parameters take a value and are specified using the\n"
+" pattern parameter=value)\n"
+"\n"
+" help : Prints this help text\n"
+"\n"
+" list : Lists owned buckets\n"
+" [allDetails] : Show full details\n"
+"\n"
+" test : Tests a bucket for existence and accessibility\n"
+" <bucket> : Bucket to test\n"
+"\n"
+" create : Create a new bucket\n"
+" <bucket> : Bucket to create\n"
+" [cannedAcl] : Canned ACL for the bucket (see Canned ACLs)\n"
+" [location] : Location for bucket (for example, EU)\n"
+"\n"
+" delete : Delete a bucket or key\n"
+" <bucket>[/<key>] : Bucket or bucket/key to delete\n"
+"\n"
+" list : List bucket contents\n"
+" <bucket> : Bucket to list\n"
+" [prefix] : Prefix for results set\n"
+" [marker] : Where in results set to start listing\n"
+" [delimiter] : Delimiter for rolling up results set\n"
+" [maxkeys] : Maximum number of keys to return in results set\n"
+" [allDetails] : Show full details for each key\n"
+"\n"
+" getacl : Get the ACL of a bucket or key\n"
+" <bucket>[/<key>] : Bucket or bucket/key to get the ACL of\n"
+" [filename] : Output filename for ACL (default is stdout)\n"
+"\n"
+" setacl : Set the ACL of a bucket or key\n"
+" <bucket>[/<key>] : Bucket or bucket/key to set the ACL of\n"
+" [filename] : Input filename for ACL (default is stdin)\n"
+"\n"
+" getlogging : Get the logging status of a bucket\n"
+" <bucket> : Bucket to get the logging status of\n"
+" [filename] : Output filename for ACL (default is stdout)\n"
+"\n"
+" setlogging : Set the logging status of a bucket\n"
+" <bucket> : Bucket to set the logging status of\n"
+" [targetBucket] : Target bucket to log to; if not present, disables\n"
+" logging\n"
+" [targetPrefix] : Key prefix to use for logs\n"
+" [filename] : Input filename for ACL (default is stdin)\n"
+"\n"
+" put : Puts an object\n"
+" <bucket>/<key> : Bucket/key to put object to\n"
+" [filename] : Filename to read source data from "
+ "(default is stdin)\n"
+" [contentLength] : How many bytes of source data to put (required if\n"
+" source file is stdin)\n"
+" [cacheControl] : Cache-Control HTTP header string to associate with\n"
+" object\n"
+" [contentType] : Content-Type HTTP header string to associate with\n"
+" object\n"
+" [md5] : MD5 for validating source data\n"
+" [contentDispositionFilename] : Content-Disposition filename string to\n"
+" associate with object\n"
+" [contentEncoding] : Content-Encoding HTTP header string to associate\n"
+" with object\n"
+" [expires] : Expiration date to associate with object\n"
+" [cannedAcl] : Canned ACL for the object (see Canned ACLs)\n"
+" [x-amz-meta-...]] : Metadata headers to associate with the object\n"
+"\n"
+" copy : Copies an object; if any options are set, the "
+ "entire\n"
+" metadata of the object is replaced\n"
+" <sourcebucket>/<sourcekey> : Source bucket/key\n"
+" <destbucket>/<destkey> : Destination bucket/key\n"
+" [cacheControl] : Cache-Control HTTP header string to associate with\n"
+" object\n"
+" [contentType] : Content-Type HTTP header string to associate with\n"
+" object\n"
+" [contentDispositionFilename] : Content-Disposition filename string to\n"
+" associate with object\n"
+" [contentEncoding] : Content-Encoding HTTP header string to associate\n"
+" with object\n"
+" [expires] : Expiration date to associate with object\n"
+" [cannedAcl] : Canned ACL for the object (see Canned ACLs)\n"
+" [x-amz-meta-...]] : Metadata headers to associate with the object\n"
+"\n"
+" get : Gets an object\n"
+" <buckey>/<key> : Bucket/key of object to get\n"
+" [filename] : Filename to write object data to (required if -s\n"
+" command line parameter was used)\n"
+" [ifModifiedSince] : Only return the object if it has been modified "
+ "since\n"
+" this date\n"
+" [ifNotmodifiedSince] : Only return the object if it has not been "
+ "modified\n"
+" since this date\n"
+" [ifMatch] : Only return the object if its ETag header matches\n"
+" this string\n"
+" [ifNotMatch] : Only return the object if its ETag header does "
+ "not\n"
+" match this string\n"
+" [startByte] : First byte of byte range to return\n"
+" [byteCount] : Number of bytes of byte range to return\n"
+"\n"
+" head : Gets only the headers of an object, implies -s\n"
+" <bucket>/<key> : Bucket/key of object to get headers of\n"
+"\n"
+" gqs : Generates an authenticated query string\n"
+" <bucket>[/<key>] : Bucket or bucket/key to generate query string for\n"
+" [expires] : Expiration date for query string\n"
+" [resource] : Sub-resource of key for query string, without a\n"
+" leading '?', for example, \"torrent\"\n"
+"\n"
+" Canned ACLs:\n"
+"\n"
+" The following canned ACLs are supported:\n"
+" private (default), public-read, public-read-write, authenticated-read\n"
+"\n"
+" ACL Format:\n"
+"\n"
+" For the getacl and setacl commands, the format of the ACL list is:\n"
+" 1) An initial line giving the owner id in this format:\n"
+" OwnerID <Owner ID> <Owner Display Name>\n"
+" 2) Optional header lines, giving column headers, starting with the\n"
+" word \"Type\", or with some number of dashes\n"
+" 3) Grant lines, of the form:\n"
+" <Grant Type> (whitespace) <Grantee> (whitespace) <Permission>\n"
+" where Grant Type is one of: Email, UserID, or Group, and\n"
+" Grantee is the identification of the grantee based on this type,\n"
+" and Permission is one of: READ, WRITE, READ_ACP, or FULL_CONTROL.\n"
+"\n"
+" Note that the easiest way to modify an ACL is to first get it, saving it\n"
+" into a file, then modifying the file, and then setting the modified file\n"
+" back as the new ACL for the bucket/object.\n"
+"\n"
+" Date Format:\n"
+"\n"
+" The format for dates used in parameters is as ISO 8601 dates, i.e.\n"
+" YYYY-MM-DDTHH:MM:SS[.s...][T/+-dd:dd]. Examples:\n"
+" 2008-07-29T20:36:14.0023T\n"
+" 2008-07-29T20:36:14.0023+06:00\n"
+" 2008-07-29T20:36:14.0023-10:00\n"
+"\n");
+
+ exit(-1);
+}
+
+
+static uint64_t convertInt(const char *str, const char *paramName)
+{
+ uint64_t ret = 0;
+
+ while (*str) {
+ if (!isdigit(*str)) {
+ fprintf(stderr, "\nERROR: Nondigit in %s parameter: %c\n",
+ paramName, *str);
+ usageExit(stderr);
+ }
+ ret *= 10;
+ ret += (*str++ - '0');
+ }
+
+ return ret;
+}
+
+
+typedef struct growbuffer
+{
+ // The total number of bytes in this block
+ int size;
+ // The start byte
+ int start;
+ // The block data
+ char data[64 * 1024];
+ struct growbuffer *prev, *next;
+} growbuffer;
+
+
+// returns nonzero on success, zero on out of memory
+static int growbuffer_append(growbuffer **gb, const char *data, int dataLen)
+{
+ while (dataLen) {
+ growbuffer *buf = *gb ? (*gb)->prev : 0;
+ if (!buf || (buf->size == sizeof(buf->data))) {
+ buf = (growbuffer *) malloc(sizeof(growbuffer));
+ if (!buf) {
+ return 0;
+ }
+ buf->size = 0;
+ buf->start = 0;
+ if (*gb) {
+ buf->prev = (*gb)->prev;
+ buf->next = *gb;
+ (*gb)->prev->next = buf;
+ (*gb)->prev = buf;
+ }
+ else {
+ buf->prev = buf->next = buf;
+ *gb = buf;
+ }
+ }
+
+ int toCopy = (sizeof(buf->data) - buf->size);
+ if (toCopy > dataLen) {
+ toCopy = dataLen;
+ }
+
+ memcpy(&(buf->data[buf->size]), data, toCopy);
+
+ buf->size += toCopy, data += toCopy, dataLen -= toCopy;
+ }
+
+ return 1;
+}
+
+
+static void growbuffer_read(growbuffer **gb, int amt, int *amtReturn,
+ char *buffer)
+{
+ *amtReturn = 0;
+
+ growbuffer *buf = *gb;
+
+ if (!buf) {
+ return;
+ }
+
+ *amtReturn = (buf->size > amt) ? amt : buf->size;
+
+ memcpy(buffer, &(buf->data[buf->start]), *amtReturn);
+
+ buf->start += *amtReturn, buf->size -= *amtReturn;
+
+ if (buf->size == 0) {
+ if (buf->next == buf) {
+ *gb = 0;
+ }
+ else {
+ *gb = buf->next;
+ }
+ free(buf);
+ }
+}
+
+
+static void growbuffer_destroy(growbuffer *gb)
+{
+ growbuffer *start = gb;
+
+ while (gb) {
+ growbuffer *next = gb->next;
+ free(gb);
+ gb = (next == start) ? 0 : next;
+ }
+}
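+
+// Illustrative behaviour (not part of the original source): appending 100 KB
+// through growbuffer_append() yields two blocks on the circular list (one
+// full 64 KB block and one holding the remaining 36 KB); growbuffer_read()
+// then drains bytes from the head block and frees each block as soon as its
+// contents have been consumed.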
+
+
+// Convenience utility for making the code look nicer. Tests a string
+// against a format; only the characters specified in the format are
+// checked (i.e. if the string is longer than the format, the string still
+// checks out ok). Format characters are:
+// d - is a digit
+// anything else - is that character
+// Returns nonzero if the string checks out, zero if it does not.
+static int checkString(const char *str, const char *format)
+{
+ while (*format) {
+ if (*format == 'd') {
+ if (!isdigit(*str)) {
+ return 0;
+ }
+ }
+ else if (*str != *format) {
+ return 0;
+ }
+ str++, format++;
+ }
+
+ return 1;
+}
+
+
+static int64_t parseIso8601Time(const char *str)
+{
+ // Check to make sure that it has a valid format
+ if (!checkString(str, "dddd-dd-ddTdd:dd:dd")) {
+ return -1;
+ }
+
+#define nextnum() (((*str - '0') * 10) + (*(str + 1) - '0'))
+
+ // Convert it
+ struct tm stm;
+ memset(&stm, 0, sizeof(stm));
+
+ stm.tm_year = (nextnum() - 19) * 100;
+ str += 2;
+ stm.tm_year += nextnum();
+ str += 3;
+
+ stm.tm_mon = nextnum() - 1;
+ str += 3;
+
+ stm.tm_mday = nextnum();
+ str += 3;
+
+ stm.tm_hour = nextnum();
+ str += 3;
+
+ stm.tm_min = nextnum();
+ str += 3;
+
+ stm.tm_sec = nextnum();
+ str += 2;
+
+ stm.tm_isdst = -1;
+
+ // This is hokey but it's the recommended way ...
+ char *tz = getenv("TZ");
+ snprintf(putenvBufG, sizeof(putenvBufG), "TZ=UTC");
+ putenv(putenvBufG);
+
+ int64_t ret = mktime(&stm);
+
+ snprintf(putenvBufG, sizeof(putenvBufG), "TZ=%s", tz ? tz : "");
+ putenv(putenvBufG);
+
+ // Skip the millis
+
+ if (*str == '.') {
+ str++;
+ while (isdigit(*str)) {
+ str++;
+ }
+ }
+
+ if (checkString(str, "-dd:dd") || checkString(str, "+dd:dd")) {
+ int sign = (*str++ == '-') ? -1 : 1;
+ int hours = nextnum();
+ str += 3;
+ int minutes = nextnum();
+ ret += (-sign * (((hours * 60) + minutes) * 60));
+ }
+ // Else it should be Z to be a conformant time string, but we just assume
+ // that it is rather than enforcing that
+
+ return ret;
+}
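+
+// Illustrative example (assumed input): "2008-07-29T20:36:14.0023-06:00"
+// parses as 20:36:14 at a -06:00 offset, so the returned value corresponds
+// to 2008-07-30T02:36:14 UTC; the ".0023" fractional seconds are skipped.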
+
+
+// Simple ACL format: Lines of this format:
+// Type - ignored
+// Starting with a dash - ignored
+// Email email_address permission
+// UserID user_id (display_name) permission
+// Group Authenticated AWS Users permission
+// Group All Users permission
+// permission is one of READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL
+static int convert_simple_acl(char *aclXml, char *ownerId,
+ char *ownerDisplayName,
+ int *aclGrantCountReturn,
+ S3AclGrant *aclGrants)
+{
+ *aclGrantCountReturn = 0;
+ *ownerId = 0;
+ *ownerDisplayName = 0;
+
+#define SKIP_SPACE(require_more) \
+ do { \
+ while (isspace(*aclXml)) { \
+ aclXml++; \
+ } \
+ if (require_more && !*aclXml) { \
+ return 0; \
+ } \
+ } while (0)
+
+#define COPY_STRING_MAXLEN(field, maxlen) \
+ do { \
+ SKIP_SPACE(1); \
+ int len = 0; \
+ while ((len < maxlen) && !isspace(*aclXml)) { \
+ field[len++] = *aclXml++; \
+ } \
+ field[len] = 0; \
+ } while (0)
+
+#define COPY_STRING(field) \
+ COPY_STRING_MAXLEN(field, (int) (sizeof(field) - 1))
+
+ while (1) {
+ SKIP_SPACE(0);
+
+ if (!*aclXml) {
+ break;
+ }
+
+ // Skip Type lines and dash lines
+ if (!strncmp(aclXml, "Type", sizeof("Type") - 1) ||
+ (*aclXml == '-')) {
+ while (*aclXml && ((*aclXml != '\n') && (*aclXml != '\r'))) {
+ aclXml++;
+ }
+ continue;
+ }
+
+ if (!strncmp(aclXml, "OwnerID", sizeof("OwnerID") - 1)) {
+ aclXml += sizeof("OwnerID") - 1;
+ COPY_STRING_MAXLEN(ownerId, S3_MAX_GRANTEE_USER_ID_SIZE);
+ SKIP_SPACE(1);
+ COPY_STRING_MAXLEN(ownerDisplayName,
+ S3_MAX_GRANTEE_DISPLAY_NAME_SIZE);
+ continue;
+ }
+
+ if (*aclGrantCountReturn == S3_MAX_ACL_GRANT_COUNT) {
+ return 0;
+ }
+
+ S3AclGrant *grant = &(aclGrants[(*aclGrantCountReturn)++]);
+
+ if (!strncmp(aclXml, "Email", sizeof("Email") - 1)) {
+ grant->granteeType = S3GranteeTypeAmazonCustomerByEmail;
+ aclXml += sizeof("Email") - 1;
+ COPY_STRING(grant->grantee.amazonCustomerByEmail.emailAddress);
+ }
+ else if (!strncmp(aclXml, "UserID", sizeof("UserID") - 1)) {
+ grant->granteeType = S3GranteeTypeCanonicalUser;
+ aclXml += sizeof("UserID") - 1;
+ COPY_STRING(grant->grantee.canonicalUser.id);
+ SKIP_SPACE(1);
+ // Now do display name
+ COPY_STRING(grant->grantee.canonicalUser.displayName);
+ }
+ else if (!strncmp(aclXml, "Group", sizeof("Group") - 1)) {
+ aclXml += sizeof("Group") - 1;
+ SKIP_SPACE(1);
+ if (!strncmp(aclXml, "Authenticated AWS Users",
+ sizeof("Authenticated AWS Users") - 1)) {
+ grant->granteeType = S3GranteeTypeAllAwsUsers;
+ aclXml += (sizeof("Authenticated AWS Users") - 1);
+ }
+ else if (!strncmp(aclXml, "All Users", sizeof("All Users") - 1)) {
+ grant->granteeType = S3GranteeTypeAllUsers;
+ aclXml += (sizeof("All Users") - 1);
+ }
+ else if (!strncmp(aclXml, "Log Delivery",
+ sizeof("Log Delivery") - 1)) {
+ grant->granteeType = S3GranteeTypeLogDelivery;
+ aclXml += (sizeof("Log Delivery") - 1);
+ }
+ else {
+ return 0;
+ }
+ }
+ else {
+ return 0;
+ }
+
+ SKIP_SPACE(1);
+
+ if (!strncmp(aclXml, "READ_ACP", sizeof("READ_ACP") - 1)) {
+ grant->permission = S3PermissionReadACP;
+ aclXml += (sizeof("READ_ACP") - 1);
+ }
+ else if (!strncmp(aclXml, "READ", sizeof("READ") - 1)) {
+ grant->permission = S3PermissionRead;
+ aclXml += (sizeof("READ") - 1);
+ }
+ else if (!strncmp(aclXml, "WRITE_ACP", sizeof("WRITE_ACP") - 1)) {
+ grant->permission = S3PermissionWriteACP;
+ aclXml += (sizeof("WRITE_ACP") - 1);
+ }
+ else if (!strncmp(aclXml, "WRITE", sizeof("WRITE") - 1)) {
+ grant->permission = S3PermissionWrite;
+ aclXml += (sizeof("WRITE") - 1);
+ }
+ else if (!strncmp(aclXml, "FULL_CONTROL",
+ sizeof("FULL_CONTROL") - 1)) {
+ grant->permission = S3PermissionFullControl;
+ aclXml += (sizeof("FULL_CONTROL") - 1);
+ }
+ }
+
+ return 1;
+}
+
+
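+// Sleeps between retries grow linearly: the first retry in a run sleeps
+// 1 second, the next 2 seconds, and so on. Because retrySleepInterval is
+// static, the interval keeps growing across requests made during the run.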
+static int should_retry()
+{
+ if (retriesG--) {
+ // Sleep before next retry; start out with a 1 second sleep
+ static int retrySleepInterval = 1;
+ sleep(retrySleepInterval);
+ // Next sleep 1 second longer
+ retrySleepInterval++;
+ return 1;
+ }
+
+ return 0;
+}
+
+
+static struct option longOptionsG[] =
+{
+ { "force", no_argument, 0, 'f' },
+ { "vhost-style", no_argument, 0, 'h' },
+ { "unencrypted", no_argument, 0, 'u' },
+ { "show-properties", no_argument, 0, 's' },
+ { "retries", required_argument, 0, 'r' },
+ { 0, 0, 0, 0 }
+};
+
+
+// response properties callback ----------------------------------------------
+
+// This callback does the same thing for every request type: prints out the
+// response properties, if the user has requested that they be shown
+static S3Status responsePropertiesCallback
+ (const S3ResponseProperties *properties, void *callbackData)
+{
+ (void) callbackData;
+
+ if (!showResponsePropertiesG) {
+ return S3StatusOK;
+ }
+
+#define print_nonnull(name, field) \
+ do { \
+ if (properties-> field) { \
+ printf("%s: %s\n", name, properties-> field); \
+ } \
+ } while (0)
+
+ print_nonnull("Content-Type", contentType);
+ print_nonnull("Request-Id", requestId);
+ print_nonnull("Request-Id-2", requestId2);
+ if (properties->contentLength > 0) {
+ printf("Content-Length: %llu\n",
+ (unsigned long long) properties->contentLength);
+ }
+ print_nonnull("Server", server);
+ print_nonnull("ETag", eTag);
+ if (properties->lastModified > 0) {
+ char timebuf[256];
+ time_t t = (time_t) properties->lastModified;
+ // gmtime is not thread-safe but we don't care here.
+ strftime(timebuf, sizeof(timebuf), "%Y-%m-%dT%H:%M:%SZ", gmtime(&t));
+ printf("Last-Modified: %s\n", timebuf);
+ }
+ int i;
+ for (i = 0; i < properties->metaDataCount; i++) {
+ printf("x-amz-meta-%s: %s\n", properties->metaData[i].name,
+ properties->metaData[i].value);
+ }
+
+ return S3StatusOK;
+}
+
+
+// response complete callback ------------------------------------------------
+
+// This callback does the same thing for every request type: saves the status
+// and error stuff in global variables
+static void responseCompleteCallback(S3Status status,
+ const S3ErrorDetails *error,
+ void *callbackData)
+{
+ (void) callbackData;
+
+ statusG = status;
+ // Compose the error details message now, although we might not use it.
+ // Can't just save a pointer to [error] since it's not guaranteed to last
+ // beyond this callback
+ int len = 0;
+ if (error && error->message) {
+ len += snprintf(&(errorDetailsG[len]), sizeof(errorDetailsG) - len,
+ " Message: %s\n", error->message);
+ }
+ if (error && error->resource) {
+ len += snprintf(&(errorDetailsG[len]), sizeof(errorDetailsG) - len,
+ " Resource: %s\n", error->resource);
+ }
+ if (error && error->furtherDetails) {
+ len += snprintf(&(errorDetailsG[len]), sizeof(errorDetailsG) - len,
+ " Further Details: %s\n", error->furtherDetails);
+ }
+ if (error && error->extraDetailsCount) {
+ len += snprintf(&(errorDetailsG[len]), sizeof(errorDetailsG) - len,
+ "%s", " Extra Details:\n");
+ int i;
+ for (i = 0; i < error->extraDetailsCount; i++) {
+ len += snprintf(&(errorDetailsG[len]),
+ sizeof(errorDetailsG) - len, " %s: %s\n",
+ error->extraDetails[i].name,
+ error->extraDetails[i].value);
+ }
+ }
+}
+
+
+// list service --------------------------------------------------------------
+
+typedef struct list_service_data
+{
+ int headerPrinted;
+ int allDetails;
+} list_service_data;
+
+
+static void printListServiceHeader(int allDetails)
+{
+ printf("%-56s %-20s", " Bucket",
+ " Created");
+ if (allDetails) {
+ printf(" %-64s %-12s",
+ " Owner ID",
+ "Display Name");
+ }
+ printf("\n");
+ printf("-------------------------------------------------------- "
+ "--------------------");
+ if (allDetails) {
+ printf(" -------------------------------------------------"
+ "--------------- ------------");
+ }
+ printf("\n");
+}
+
+
+static S3Status listServiceCallback(const char *ownerId,
+ const char *ownerDisplayName,
+ const char *bucketName,
+ int64_t creationDate, void *callbackData)
+{
+ list_service_data *data = (list_service_data *) callbackData;
+
+ if (!data->headerPrinted) {
+ data->headerPrinted = 1;
+ printListServiceHeader(data->allDetails);
+ }
+
+ char timebuf[256];
+ if (creationDate >= 0) {
+ time_t t = (time_t) creationDate;
+ strftime(timebuf, sizeof(timebuf), "%Y-%m-%dT%H:%M:%SZ", gmtime(&t));
+ }
+ else {
+ timebuf[0] = 0;
+ }
+
+ printf("%-56s %-20s", bucketName, timebuf);
+ if (data->allDetails) {
+ printf(" %-64s %-12s", ownerId ? ownerId : "",
+ ownerDisplayName ? ownerDisplayName : "");
+ }
+ printf("\n");
+
+ return S3StatusOK;
+}
+
+
+static void list_service(int allDetails)
+{
+ list_service_data data;
+
+ data.headerPrinted = 0;
+ data.allDetails = allDetails;
+
+ S3_init();
+
+ S3ListServiceHandler listServiceHandler =
+ {
+ { &responsePropertiesCallback, &responseCompleteCallback },
+ &listServiceCallback
+ };
+
+ do {
+ S3_list_service(protocolG, accessKeyIdG, secretAccessKeyG, 0,
+ &listServiceHandler, &data);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if (statusG == S3StatusOK) {
+ if (!data.headerPrinted) {
+ printListServiceHeader(allDetails);
+ }
+ }
+ else {
+ printError();
+ }
+
+ S3_deinitialize();
+}
+
+
+// test bucket ---------------------------------------------------------------
+
+static void test_bucket(int argc, char **argv, int optindex)
+{
+ // test bucket
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: bucket\n");
+ usageExit(stderr);
+ }
+
+ const char *bucketName = argv[optindex++];
+
+ if (optindex != argc) {
+ fprintf(stderr, "\nERROR: Extraneous parameter: %s\n", argv[optindex]);
+ usageExit(stderr);
+ }
+
+ S3_init();
+
+ S3ResponseHandler responseHandler =
+ {
+ &responsePropertiesCallback, &responseCompleteCallback
+ };
+
+ char locationConstraint[64];
+ do {
+ S3_test_bucket(protocolG, uriStyleG, accessKeyIdG, secretAccessKeyG,
+ bucketName, sizeof(locationConstraint),
+ locationConstraint, 0, &responseHandler, 0);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ const char *result;
+
+ switch (statusG) {
+ case S3StatusOK:
+ // bucket exists
+ result = locationConstraint[0] ? locationConstraint : "USA";
+ break;
+ case S3StatusErrorNoSuchBucket:
+ result = "Does Not Exist";
+ break;
+ case S3StatusErrorAccessDenied:
+ result = "Access Denied";
+ break;
+ default:
+ result = 0;
+ break;
+ }
+
+ if (result) {
+ printf("%-56s %-20s\n", " Bucket",
+ " Status");
+ printf("-------------------------------------------------------- "
+ "--------------------\n");
+ printf("%-56s %-20s\n", bucketName, result);
+ }
+ else {
+ printError();
+ }
+
+ S3_deinitialize();
+}
+
+
+// create bucket -------------------------------------------------------------
+
+static void create_bucket(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: bucket\n");
+ usageExit(stderr);
+ }
+
+ const char *bucketName = argv[optindex++];
+
+ if (!forceG && (S3_validate_bucket_name
+ (bucketName, S3UriStyleVirtualHost) != S3StatusOK)) {
+ fprintf(stderr, "\nWARNING: Bucket name is not valid for "
+ "virtual-host style URI access.\n");
+ fprintf(stderr, "Bucket not created. Use -f option to force the "
+ "bucket to be created despite\n");
+ fprintf(stderr, "this warning.\n\n");
+ exit(-1);
+ }
+
+ const char *locationConstraint = 0;
+ S3CannedAcl cannedAcl = S3CannedAclPrivate;
+ while (optindex < argc) {
+ char *param = argv[optindex++];
+ if (!strncmp(param, LOCATION_PREFIX, LOCATION_PREFIX_LEN)) {
+ locationConstraint = &(param[LOCATION_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, CANNED_ACL_PREFIX, CANNED_ACL_PREFIX_LEN)) {
+ char *val = &(param[CANNED_ACL_PREFIX_LEN]);
+ if (!strcmp(val, "private")) {
+ cannedAcl = S3CannedAclPrivate;
+ }
+ else if (!strcmp(val, "public-read")) {
+ cannedAcl = S3CannedAclPublicRead;
+ }
+ else if (!strcmp(val, "public-read-write")) {
+ cannedAcl = S3CannedAclPublicReadWrite;
+ }
+ else if (!strcmp(val, "authenticated-read")) {
+ cannedAcl = S3CannedAclAuthenticatedRead;
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown canned ACL: %s\n", val);
+ usageExit(stderr);
+ }
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown param: %s\n", param);
+ usageExit(stderr);
+ }
+ }
+
+ S3_init();
+
+ S3ResponseHandler responseHandler =
+ {
+ &responsePropertiesCallback, &responseCompleteCallback
+ };
+
+ do {
+ S3_create_bucket(protocolG, accessKeyIdG, secretAccessKeyG,
+ bucketName, cannedAcl, locationConstraint, 0,
+ &responseHandler, 0);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if (statusG == S3StatusOK) {
+ printf("Bucket successfully created.\n");
+ }
+ else {
+ printError();
+ }
+
+ S3_deinitialize();
+}
+
+
+// delete bucket -------------------------------------------------------------
+
+static void delete_bucket(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: bucket\n");
+ usageExit(stderr);
+ }
+
+ const char *bucketName = argv[optindex++];
+
+ if (optindex != argc) {
+ fprintf(stderr, "\nERROR: Extraneous parameter: %s\n", argv[optindex]);
+ usageExit(stderr);
+ }
+
+ S3_init();
+
+ S3ResponseHandler responseHandler =
+ {
+ &responsePropertiesCallback, &responseCompleteCallback
+ };
+
+ do {
+ S3_delete_bucket(protocolG, uriStyleG, accessKeyIdG, secretAccessKeyG,
+ bucketName, 0, &responseHandler, 0);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if (statusG != S3StatusOK) {
+ printError();
+ }
+
+ S3_deinitialize();
+}
+
+
+// list bucket ---------------------------------------------------------------
+
+typedef struct list_bucket_callback_data
+{
+ int isTruncated;
+ char nextMarker[1024];
+ int keyCount;
+ int allDetails;
+} list_bucket_callback_data;
+
+
+static void printListBucketHeader(int allDetails)
+{
+ printf("%-50s %-20s %-5s",
+ " Key",
+ " Last Modified", "Size");
+ if (allDetails) {
+ printf(" %-34s %-64s %-12s",
+ " ETag",
+ " Owner ID",
+ "Display Name");
+ }
+ printf("\n");
+ printf("-------------------------------------------------- "
+ "-------------------- -----");
+ if (allDetails) {
+ printf(" ---------------------------------- "
+ "-------------------------------------------------"
+ "--------------- ------------");
+ }
+ printf("\n");
+}
+
+
+static S3Status listBucketCallback(int isTruncated, const char *nextMarker,
+ int contentsCount,
+ const S3ListBucketContent *contents,
+ int commonPrefixesCount,
+ const char **commonPrefixes,
+ void *callbackData)
+{
+ list_bucket_callback_data *data =
+ (list_bucket_callback_data *) callbackData;
+
+ data->isTruncated = isTruncated;
+ // This is tricky. S3 doesn't return the NextMarker if there is no
+ // delimiter. Why, I don't know, since it's still useful for paging
+ // through results. We want NextMarker to be the last content in the
+ // list, so set it to that if necessary.
+ if ((!nextMarker || !nextMarker[0]) && contentsCount) {
+ nextMarker = contents[contentsCount - 1].key;
+ }
+ if (nextMarker) {
+ snprintf(data->nextMarker, sizeof(data->nextMarker), "%s",
+ nextMarker);
+ }
+ else {
+ data->nextMarker[0] = 0;
+ }
+
+ if (contentsCount && !data->keyCount) {
+ printListBucketHeader(data->allDetails);
+ }
+
+ int i;
+ for (i = 0; i < contentsCount; i++) {
+ const S3ListBucketContent *content = &(contents[i]);
+ char timebuf[256];
+ if (0) {  // disabled branch: verbose per-object output
+ time_t t = (time_t) content->lastModified;
+ strftime(timebuf, sizeof(timebuf), "%Y-%m-%dT%H:%M:%SZ",
+ gmtime(&t));
+ printf("\nKey: %s\n", content->key);
+ printf("Last Modified: %s\n", timebuf);
+ printf("ETag: %s\n", content->eTag);
+ printf("Size: %llu\n", (unsigned long long) content->size);
+ if (content->ownerId) {
+ printf("Owner ID: %s\n", content->ownerId);
+ }
+ if (content->ownerDisplayName) {
+ printf("Owner Display Name: %s\n", content->ownerDisplayName);
+ }
+ }
+ else {
+ time_t t = (time_t) content->lastModified;
+ strftime(timebuf, sizeof(timebuf), "%Y-%m-%dT%H:%M:%SZ",
+ gmtime(&t));
+ char sizebuf[16];
+ if (content->size < 100000) {
+ sprintf(sizebuf, "%5llu", (unsigned long long) content->size);
+ }
+ else if (content->size < (1024 * 1024)) {
+ sprintf(sizebuf, "%4lluK",
+ ((unsigned long long) content->size) / 1024ULL);
+ }
+ else if (content->size < (10 * 1024 * 1024)) {
+ float f = content->size;
+ f /= (1024 * 1024);
+ sprintf(sizebuf, "%1.2fM", f);
+ }
+ else if (content->size < (1024 * 1024 * 1024)) {
+ sprintf(sizebuf, "%4lluM",
+ ((unsigned long long) content->size) /
+ (1024ULL * 1024ULL));
+ }
+ else {
+ float f = (content->size / 1024);
+ f /= (1024 * 1024);
+ sprintf(sizebuf, "%1.2fG", f);
+ }
+ printf("%-50s %s %s", content->key, timebuf, sizebuf);
+ if (data->allDetails) {
+ printf(" %-34s %-64s %-12s",
+ content->eTag,
+ content->ownerId ? content->ownerId : "",
+ content->ownerDisplayName ?
+ content->ownerDisplayName : "");
+ }
+ printf("\n");
+ }
+ }
+
+ data->keyCount += contentsCount;
+
+ for (i = 0; i < commonPrefixesCount; i++) {
+ printf("\nCommon Prefix: %s\n", commonPrefixes[i]);
+ }
+
+ return S3StatusOK;
+}
+
+
+static void list_bucket(const char *bucketName, const char *prefix,
+ const char *marker, const char *delimiter,
+ int maxkeys, int allDetails)
+{
+ S3_init();
+
+ S3BucketContext bucketContext =
+ {
+ bucketName,
+ protocolG,
+ uriStyleG,
+ accessKeyIdG,
+ secretAccessKeyG
+ };
+
+ S3ListBucketHandler listBucketHandler =
+ {
+ { &responsePropertiesCallback, &responseCompleteCallback },
+ &listBucketCallback
+ };
+
+ list_bucket_callback_data data;
+
+ snprintf(data.nextMarker, sizeof(data.nextMarker), "%s",
+ marker ? marker : "");
+ data.keyCount = 0;
+ data.allDetails = allDetails;
+
+ do {
+ data.isTruncated = 0;
+ do {
+ S3_list_bucket(&bucketContext, prefix, data.nextMarker,
+ delimiter, maxkeys, 0, &listBucketHandler, &data);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+ if (statusG != S3StatusOK) {
+ break;
+ }
+ marker = data.nextMarker;
+ } while (data.isTruncated && (!maxkeys || (data.keyCount < maxkeys)));
+
+ if (statusG == S3StatusOK) {
+ if (!data.keyCount) {
+ printListBucketHeader(allDetails);
+ }
+ }
+ else {
+ printError();
+ }
+
+ S3_deinitialize();
+}
+
+
+static void list(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ list_service(0);
+ return;
+ }
+
+ const char *bucketName = 0;
+
+ const char *prefix = 0, *marker = 0, *delimiter = 0;
+ int maxkeys = 0, allDetails = 0;
+ while (optindex < argc) {
+ char *param = argv[optindex++];
+ if (!strncmp(param, PREFIX_PREFIX, PREFIX_PREFIX_LEN)) {
+ prefix = &(param[PREFIX_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, MARKER_PREFIX, MARKER_PREFIX_LEN)) {
+ marker = &(param[MARKER_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, DELIMITER_PREFIX, DELIMITER_PREFIX_LEN)) {
+ delimiter = &(param[DELIMITER_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, MAXKEYS_PREFIX, MAXKEYS_PREFIX_LEN)) {
+ maxkeys = convertInt(&(param[MAXKEYS_PREFIX_LEN]), "maxkeys");
+ }
+ else if (!strncmp(param, ALL_DETAILS_PREFIX,
+ ALL_DETAILS_PREFIX_LEN)) {
+ const char *ad = &(param[ALL_DETAILS_PREFIX_LEN]);
+ if (!strcmp(ad, "true") || !strcmp(ad, "TRUE") ||
+ !strcmp(ad, "yes") || !strcmp(ad, "YES") ||
+ !strcmp(ad, "1")) {
+ allDetails = 1;
+ }
+ }
+ else if (!bucketName) {
+ bucketName = param;
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown param: %s\n", param);
+ usageExit(stderr);
+ }
+ }
+
+ if (bucketName) {
+ list_bucket(bucketName, prefix, marker, delimiter, maxkeys,
+ allDetails);
+ }
+ else {
+ list_service(allDetails);
+ }
+}
+
+
+
+// delete object -------------------------------------------------------------
+
+static void delete_object(int argc, char **argv, int optindex)
+{
+ (void) argc;
+
+ // Split bucket/key
+ char *slash = argv[optindex];
+
+ // We know there is a slash in there, delete_object is only called if so
+ while (*slash && (*slash != '/')) {
+ slash++;
+ }
+ *slash++ = 0;
+
+ const char *bucketName = argv[optindex++];
+ const char *key = slash;
+
+ S3_init();
+
+ S3BucketContext bucketContext =
+ {
+ bucketName,
+ protocolG,
+ uriStyleG,
+ accessKeyIdG,
+ secretAccessKeyG
+ };
+
+ S3ResponseHandler responseHandler =
+ {
+ 0,
+ &responseCompleteCallback
+ };
+
+ do {
+ S3_delete_object(&bucketContext, key, 0, &responseHandler, 0);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if ((statusG != S3StatusOK) &&
+ (statusG != S3StatusErrorPreconditionFailed)) {
+ printError();
+ }
+
+ S3_deinitialize();
+}
+
+
+// put object ----------------------------------------------------------------
+
+typedef struct put_object_callback_data
+{
+ FILE *infile;
+ growbuffer *gb;
+ uint64_t contentLength, originalContentLength;
+ int noStatus;
+} put_object_callback_data;
+
+
+static int putObjectDataCallback(int bufferSize, char *buffer,
+ void *callbackData)
+{
+ put_object_callback_data *data =
+ (put_object_callback_data *) callbackData;
+
+ int ret = 0;
+
+ if (data->contentLength) {
+ int toRead = ((data->contentLength > (unsigned) bufferSize) ?
+ (unsigned) bufferSize : data->contentLength);
+ if (data->gb) {
+ growbuffer_read(&(data->gb), toRead, &ret, buffer);
+ }
+ else if (data->infile) {
+ ret = fread(buffer, 1, toRead, data->infile);
+ }
+ }
+
+ data->contentLength -= ret;
+
+ if (data->contentLength && !data->noStatus) {
+ // Avoid a weird bug in MinGW, which won't print the second integer
+ // value properly when it's in the same call, so print separately
+ printf("%llu bytes remaining ",
+ (unsigned long long) data->contentLength);
+ printf("(%d%% complete) ...\n",
+ (int) (((data->originalContentLength -
+ data->contentLength) * 100) /
+ data->originalContentLength));
+ }
+
+ return ret;
+}
+
+
+static void put_object(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: bucket/key\n");
+ usageExit(stderr);
+ }
+
+ // Split bucket/key
+ char *slash = argv[optindex];
+ while (*slash && (*slash != '/')) {
+ slash++;
+ }
+ if (!*slash || !*(slash + 1)) {
+ fprintf(stderr, "\nERROR: Invalid bucket/key name: %s\n",
+ argv[optindex]);
+ usageExit(stderr);
+ }
+ *slash++ = 0;
+
+ const char *bucketName = argv[optindex++];
+ const char *key = slash;
+
+ const char *filename = 0;
+ uint64_t contentLength = 0;
+ const char *cacheControl = 0, *contentType = 0, *md5 = 0;
+ const char *contentDispositionFilename = 0, *contentEncoding = 0;
+ int64_t expires = -1;
+ S3CannedAcl cannedAcl = S3CannedAclPrivate;
+ int metaPropertiesCount = 0;
+ S3NameValue metaProperties[S3_MAX_METADATA_COUNT];
+ int noStatus = 0;
+
+ while (optindex < argc) {
+ char *param = argv[optindex++];
+ if (!strncmp(param, FILENAME_PREFIX, FILENAME_PREFIX_LEN)) {
+ filename = &(param[FILENAME_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, CONTENT_LENGTH_PREFIX,
+ CONTENT_LENGTH_PREFIX_LEN)) {
+ contentLength = convertInt(&(param[CONTENT_LENGTH_PREFIX_LEN]),
+ "contentLength");
+ if (contentLength > (5LL * 1024 * 1024 * 1024)) {
+ fprintf(stderr, "\nERROR: contentLength must be no greater "
+ "than 5 GB\n");
+ usageExit(stderr);
+ }
+ }
+ else if (!strncmp(param, CACHE_CONTROL_PREFIX,
+ CACHE_CONTROL_PREFIX_LEN)) {
+ cacheControl = &(param[CACHE_CONTROL_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, CONTENT_TYPE_PREFIX,
+ CONTENT_TYPE_PREFIX_LEN)) {
+ contentType = &(param[CONTENT_TYPE_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, MD5_PREFIX, MD5_PREFIX_LEN)) {
+ md5 = &(param[MD5_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, CONTENT_DISPOSITION_FILENAME_PREFIX,
+ CONTENT_DISPOSITION_FILENAME_PREFIX_LEN)) {
+ contentDispositionFilename =
+ &(param[CONTENT_DISPOSITION_FILENAME_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, CONTENT_ENCODING_PREFIX,
+ CONTENT_ENCODING_PREFIX_LEN)) {
+ contentEncoding = &(param[CONTENT_ENCODING_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, EXPIRES_PREFIX, EXPIRES_PREFIX_LEN)) {
+ expires = parseIso8601Time(&(param[EXPIRES_PREFIX_LEN]));
+ if (expires < 0) {
+ fprintf(stderr, "\nERROR: Invalid expires time "
+ "value; ISO 8601 time format required\n");
+ usageExit(stderr);
+ }
+ }
+ else if (!strncmp(param, X_AMZ_META_PREFIX, X_AMZ_META_PREFIX_LEN)) {
+ if (metaPropertiesCount == S3_MAX_METADATA_COUNT) {
+ fprintf(stderr, "\nERROR: Too many x-amz-meta- properties, "
+ "limit %lu: %s\n",
+ (unsigned long) S3_MAX_METADATA_COUNT, param);
+ usageExit(stderr);
+ }
+ char *name = &(param[X_AMZ_META_PREFIX_LEN]);
+ char *value = name;
+ while (*value && (*value != '=')) {
+ value++;
+ }
+ if (!*value || !*(value + 1)) {
+ fprintf(stderr, "\nERROR: Invalid parameter: %s\n", param);
+ usageExit(stderr);
+ }
+ *value++ = 0;
+ metaProperties[metaPropertiesCount].name = name;
+ metaProperties[metaPropertiesCount++].value = value;
+ }
+ else if (!strncmp(param, CANNED_ACL_PREFIX, CANNED_ACL_PREFIX_LEN)) {
+ char *val = &(param[CANNED_ACL_PREFIX_LEN]);
+ if (!strcmp(val, "private")) {
+ cannedAcl = S3CannedAclPrivate;
+ }
+ else if (!strcmp(val, "public-read")) {
+ cannedAcl = S3CannedAclPublicRead;
+ }
+ else if (!strcmp(val, "public-read-write")) {
+ cannedAcl = S3CannedAclPublicReadWrite;
+ }
+ else if (!strcmp(val, "authenticated-read")) {
+ cannedAcl = S3CannedAclAuthenticatedRead;
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown canned ACL: %s\n", val);
+ usageExit(stderr);
+ }
+ }
+ else if (!strncmp(param, NO_STATUS_PREFIX, NO_STATUS_PREFIX_LEN)) {
+ const char *ns = &(param[NO_STATUS_PREFIX_LEN]);
+ if (!strcmp(ns, "true") || !strcmp(ns, "TRUE") ||
+ !strcmp(ns, "yes") || !strcmp(ns, "YES") ||
+ !strcmp(ns, "1")) {
+ noStatus = 1;
+ }
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown param: %s\n", param);
+ usageExit(stderr);
+ }
+ }
+
+ put_object_callback_data data;
+
+ data.infile = 0;
+ data.gb = 0;
+ data.noStatus = noStatus;
+
+ if (filename) {
+ if (!contentLength) {
+ struct stat statbuf;
+ // Stat the file to get its length
+ if (stat(filename, &statbuf) == -1) {
+ fprintf(stderr, "\nERROR: Failed to stat file %s: ",
+ filename);
+ perror(0);
+ exit(-1);
+ }
+ contentLength = statbuf.st_size;
+ }
+ // Open the file
+ if (!(data.infile = fopen(filename, "r" FOPEN_EXTRA_FLAGS))) {
+ fprintf(stderr, "\nERROR: Failed to open input file %s: ",
+ filename);
+ perror(0);
+ exit(-1);
+ }
+ }
+ else {
+ // Read from stdin. If contentLength is not provided, we have
+ // to read it all in to get contentLength.
+ if (!contentLength) {
+ // Read all of stdin to get the data
+ char buffer[64 * 1024];
+ while (1) {
+ int amtRead = fread(buffer, 1, sizeof(buffer), stdin);
+ if (amtRead == 0) {
+ break;
+ }
+ if (!growbuffer_append(&(data.gb), buffer, amtRead)) {
+ fprintf(stderr, "\nERROR: Out of memory while reading "
+ "stdin\n");
+ exit(-1);
+ }
+ contentLength += amtRead;
+ if (amtRead < (int) sizeof(buffer)) {
+ break;
+ }
+ }
+ }
+ else {
+ data.infile = stdin;
+ }
+ }
+
+ data.contentLength = data.originalContentLength = contentLength;
+
+ S3_init();
+
+ S3BucketContext bucketContext =
+ {
+ bucketName,
+ protocolG,
+ uriStyleG,
+ accessKeyIdG,
+ secretAccessKeyG
+ };
+
+ S3PutProperties putProperties =
+ {
+ contentType,
+ md5,
+ cacheControl,
+ contentDispositionFilename,
+ contentEncoding,
+ expires,
+ cannedAcl,
+ metaPropertiesCount,
+ metaProperties
+ };
+
+ S3PutObjectHandler putObjectHandler =
+ {
+ { &responsePropertiesCallback, &responseCompleteCallback },
+ &putObjectDataCallback
+ };
+
+ do {
+ S3_put_object(&bucketContext, key, contentLength, &putProperties, 0,
+ &putObjectHandler, &data);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if (data.infile) {
+ fclose(data.infile);
+ }
+ else if (data.gb) {
+ growbuffer_destroy(data.gb);
+ }
+
+ if (statusG != S3StatusOK) {
+ printError();
+ }
+ else if (data.contentLength) {
+ fprintf(stderr, "\nERROR: Failed to read remaining %llu bytes from "
+ "input\n", (unsigned long long) data.contentLength);
+ }
+
+ S3_deinitialize();
+}
+
+
+// copy object ---------------------------------------------------------------
+
+static void copy_object(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: source bucket/key\n");
+ usageExit(stderr);
+ }
+
+ // Split bucket/key
+ char *slash = argv[optindex];
+ while (*slash && (*slash != '/')) {
+ slash++;
+ }
+ if (!*slash || !*(slash + 1)) {
+ fprintf(stderr, "\nERROR: Invalid source bucket/key name: %s\n",
+ argv[optindex]);
+ usageExit(stderr);
+ }
+ *slash++ = 0;
+
+ const char *sourceBucketName = argv[optindex++];
+ const char *sourceKey = slash;
+
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: "
+ "destination bucket/key\n");
+ usageExit(stderr);
+ }
+
+ // Split bucket/key
+ slash = argv[optindex];
+ while (*slash && (*slash != '/')) {
+ slash++;
+ }
+ if (!*slash || !*(slash + 1)) {
+ fprintf(stderr, "\nERROR: Invalid destination bucket/key name: %s\n",
+ argv[optindex]);
+ usageExit(stderr);
+ }
+ *slash++ = 0;
+
+ const char *destinationBucketName = argv[optindex++];
+ const char *destinationKey = slash;
+
+ const char *cacheControl = 0, *contentType = 0;
+ const char *contentDispositionFilename = 0, *contentEncoding = 0;
+ int64_t expires = -1;
+ S3CannedAcl cannedAcl = S3CannedAclPrivate;
+ int metaPropertiesCount = 0;
+ S3NameValue metaProperties[S3_MAX_METADATA_COUNT];
+ int anyPropertiesSet = 0;
+
+ while (optindex < argc) {
+ char *param = argv[optindex++];
+ if (!strncmp(param, CACHE_CONTROL_PREFIX,
+ CACHE_CONTROL_PREFIX_LEN)) {
+ cacheControl = &(param[CACHE_CONTROL_PREFIX_LEN]);
+ anyPropertiesSet = 1;
+ }
+ else if (!strncmp(param, CONTENT_TYPE_PREFIX,
+ CONTENT_TYPE_PREFIX_LEN)) {
+ contentType = &(param[CONTENT_TYPE_PREFIX_LEN]);
+ anyPropertiesSet = 1;
+ }
+ else if (!strncmp(param, CONTENT_DISPOSITION_FILENAME_PREFIX,
+ CONTENT_DISPOSITION_FILENAME_PREFIX_LEN)) {
+ contentDispositionFilename =
+ &(param[CONTENT_DISPOSITION_FILENAME_PREFIX_LEN]);
+ anyPropertiesSet = 1;
+ }
+ else if (!strncmp(param, CONTENT_ENCODING_PREFIX,
+ CONTENT_ENCODING_PREFIX_LEN)) {
+ contentEncoding = &(param[CONTENT_ENCODING_PREFIX_LEN]);
+ anyPropertiesSet = 1;
+ }
+ else if (!strncmp(param, EXPIRES_PREFIX, EXPIRES_PREFIX_LEN)) {
+ expires = parseIso8601Time(&(param[EXPIRES_PREFIX_LEN]));
+ if (expires < 0) {
+ fprintf(stderr, "\nERROR: Invalid expires time "
+ "value; ISO 8601 time format required\n");
+ usageExit(stderr);
+ }
+ anyPropertiesSet = 1;
+ }
+ else if (!strncmp(param, X_AMZ_META_PREFIX, X_AMZ_META_PREFIX_LEN)) {
+ if (metaPropertiesCount == S3_MAX_METADATA_COUNT) {
+ fprintf(stderr, "\nERROR: Too many x-amz-meta- properties, "
+ "limit %lu: %s\n",
+ (unsigned long) S3_MAX_METADATA_COUNT, param);
+ usageExit(stderr);
+ }
+ char *name = &(param[X_AMZ_META_PREFIX_LEN]);
+ char *value = name;
+ while (*value && (*value != '=')) {
+ value++;
+ }
+ if (!*value || !*(value + 1)) {
+ fprintf(stderr, "\nERROR: Invalid parameter: %s\n", param);
+ usageExit(stderr);
+ }
+ *value++ = 0;
+ metaProperties[metaPropertiesCount].name = name;
+ metaProperties[metaPropertiesCount++].value = value;
+ anyPropertiesSet = 1;
+ }
+ else if (!strncmp(param, CANNED_ACL_PREFIX, CANNED_ACL_PREFIX_LEN)) {
+ char *val = &(param[CANNED_ACL_PREFIX_LEN]);
+ if (!strcmp(val, "private")) {
+ cannedAcl = S3CannedAclPrivate;
+ }
+ else if (!strcmp(val, "public-read")) {
+ cannedAcl = S3CannedAclPublicRead;
+ }
+ else if (!strcmp(val, "public-read-write")) {
+ cannedAcl = S3CannedAclPublicReadWrite;
+ }
+ else if (!strcmp(val, "authenticated-read")) {
+ cannedAcl = S3CannedAclAuthenticatedRead;
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown canned ACL: %s\n", val);
+ usageExit(stderr);
+ }
+ anyPropertiesSet = 1;
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown param: %s\n", param);
+ usageExit(stderr);
+ }
+ }
+
+ S3_init();
+
+ S3BucketContext bucketContext =
+ {
+ sourceBucketName,
+ protocolG,
+ uriStyleG,
+ accessKeyIdG,
+ secretAccessKeyG
+ };
+
+ S3PutProperties putProperties =
+ {
+ contentType,
+ 0,
+ cacheControl,
+ contentDispositionFilename,
+ contentEncoding,
+ expires,
+ cannedAcl,
+ metaPropertiesCount,
+ metaProperties
+ };
+
+ S3ResponseHandler responseHandler =
+ {
+ &responsePropertiesCallback,
+ &responseCompleteCallback
+ };
+
+ int64_t lastModified;
+ char eTag[256];
+
+ do {
+ S3_copy_object(&bucketContext, sourceKey, destinationBucketName,
+ destinationKey, anyPropertiesSet ? &putProperties : 0,
+ &lastModified, sizeof(eTag), eTag, 0,
+ &responseHandler, 0);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if (statusG == S3StatusOK) {
+ if (lastModified >= 0) {
+ char timebuf[256];
+ time_t t = (time_t) lastModified;
+ strftime(timebuf, sizeof(timebuf), "%Y-%m-%dT%H:%M:%SZ",
+ gmtime(&t));
+ printf("Last-Modified: %s\n", timebuf);
+ }
+ if (eTag[0]) {
+ printf("ETag: %s\n", eTag);
+ }
+ }
+ else {
+ printError();
+ }
+
+ S3_deinitialize();
+}
+
+
+// get object ----------------------------------------------------------------
+
+static S3Status getObjectDataCallback(int bufferSize, const char *buffer,
+ void *callbackData)
+{
+ FILE *outfile = (FILE *) callbackData;
+
+ size_t wrote = fwrite(buffer, 1, bufferSize, outfile);
+
+ return ((wrote < (size_t) bufferSize) ?
+ S3StatusAbortedByCallback : S3StatusOK);
+}
+
+
+static void get_object(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: bucket/key\n");
+ usageExit(stderr);
+ }
+
+ // Split bucket/key
+ char *slash = argv[optindex];
+ while (*slash && (*slash != '/')) {
+ slash++;
+ }
+ if (!*slash || !*(slash + 1)) {
+ fprintf(stderr, "\nERROR: Invalid bucket/key name: %s\n",
+ argv[optindex]);
+ usageExit(stderr);
+ }
+ *slash++ = 0;
+
+ const char *bucketName = argv[optindex++];
+ const char *key = slash;
+
+ const char *filename = 0;
+ int64_t ifModifiedSince = -1, ifNotModifiedSince = -1;
+ const char *ifMatch = 0, *ifNotMatch = 0;
+ uint64_t startByte = 0, byteCount = 0;
+
+ while (optindex < argc) {
+ char *param = argv[optindex++];
+ if (!strncmp(param, FILENAME_PREFIX, FILENAME_PREFIX_LEN)) {
+ filename = &(param[FILENAME_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, IF_MODIFIED_SINCE_PREFIX,
+ IF_MODIFIED_SINCE_PREFIX_LEN)) {
+ // Parse ifModifiedSince
+ ifModifiedSince = parseIso8601Time
+ (&(param[IF_MODIFIED_SINCE_PREFIX_LEN]));
+ if (ifModifiedSince < 0) {
+ fprintf(stderr, "\nERROR: Invalid ifModifiedSince time "
+ "value; ISO 8601 time format required\n");
+ usageExit(stderr);
+ }
+ }
+ else if (!strncmp(param, IF_NOT_MODIFIED_SINCE_PREFIX,
+ IF_NOT_MODIFIED_SINCE_PREFIX_LEN)) {
+ // Parse ifNotModifiedSince
+ ifNotModifiedSince = parseIso8601Time
+ (&(param[IF_NOT_MODIFIED_SINCE_PREFIX_LEN]));
+ if (ifNotModifiedSince < 0) {
+ fprintf(stderr, "\nERROR: Invalid ifNotModifiedSince time "
+ "value; ISO 8601 time format required\n");
+ usageExit(stderr);
+ }
+ }
+ else if (!strncmp(param, IF_MATCH_PREFIX, IF_MATCH_PREFIX_LEN)) {
+ ifMatch = &(param[IF_MATCH_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, IF_NOT_MATCH_PREFIX,
+ IF_NOT_MATCH_PREFIX_LEN)) {
+ ifNotMatch = &(param[IF_NOT_MATCH_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, START_BYTE_PREFIX, START_BYTE_PREFIX_LEN)) {
+ startByte = convertInt
+ (&(param[START_BYTE_PREFIX_LEN]), "startByte");
+ }
+ else if (!strncmp(param, BYTE_COUNT_PREFIX, BYTE_COUNT_PREFIX_LEN)) {
+ byteCount = convertInt
+ (&(param[BYTE_COUNT_PREFIX_LEN]), "byteCount");
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown param: %s\n", param);
+ usageExit(stderr);
+ }
+ }
+
+ FILE *outfile = 0;
+
+ if (filename) {
+ // Stat the file, and if it doesn't exist, open it in w mode
+ struct stat buf;
+ if (stat(filename, &buf) == -1) {
+ outfile = fopen(filename, "w" FOPEN_EXTRA_FLAGS);
+ }
+ else {
+ // Open in r+ so that we don't truncate the file; if there is an
+ // error and we write no bytes, the file is left unmodified
+ outfile = fopen(filename, "r+" FOPEN_EXTRA_FLAGS);
+ }
+
+ if (!outfile) {
+ fprintf(stderr, "\nERROR: Failed to open output file %s: ",
+ filename);
+ perror(0);
+ exit(-1);
+ }
+ }
+ else if (showResponsePropertiesG) {
+ fprintf(stderr, "\nERROR: get -s requires a filename parameter\n");
+ usageExit(stderr);
+ }
+ else {
+ outfile = stdout;
+ }
+
+ S3_init();
+
+ S3BucketContext bucketContext =
+ {
+ bucketName,
+ protocolG,
+ uriStyleG,
+ accessKeyIdG,
+ secretAccessKeyG
+ };
+
+ S3GetConditions getConditions =
+ {
+ ifModifiedSince,
+ ifNotModifiedSince,
+ ifMatch,
+ ifNotMatch
+ };
+
+ S3GetObjectHandler getObjectHandler =
+ {
+ { &responsePropertiesCallback, &responseCompleteCallback },
+ &getObjectDataCallback
+ };
+
+ do {
+ S3_get_object(&bucketContext, key, &getConditions, startByte,
+ byteCount, 0, &getObjectHandler, outfile);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if (statusG != S3StatusOK) {
+ printError();
+ }
+
+ fclose(outfile);
+
+ S3_deinitialize();
+}
+
+
+// head object ---------------------------------------------------------------
+
+static void head_object(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: bucket/key\n");
+ usageExit(stderr);
+ }
+
+ // Head implies showing response properties
+ showResponsePropertiesG = 1;
+
+ // Split bucket/key
+ char *slash = argv[optindex];
+
+ while (*slash && (*slash != '/')) {
+ slash++;
+ }
+ if (!*slash || !*(slash + 1)) {
+ fprintf(stderr, "\nERROR: Invalid bucket/key name: %s\n",
+ argv[optindex]);
+ usageExit(stderr);
+ }
+ *slash++ = 0;
+
+ const char *bucketName = argv[optindex++];
+ const char *key = slash;
+
+ if (optindex != argc) {
+ fprintf(stderr, "\nERROR: Extraneous parameter: %s\n", argv[optindex]);
+ usageExit(stderr);
+ }
+
+ S3_init();
+
+ S3BucketContext bucketContext =
+ {
+ bucketName,
+ protocolG,
+ uriStyleG,
+ accessKeyIdG,
+ secretAccessKeyG
+ };
+
+ S3ResponseHandler responseHandler =
+ {
+ &responsePropertiesCallback,
+ &responseCompleteCallback
+ };
+
+ do {
+ S3_head_object(&bucketContext, key, 0, &responseHandler, 0);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if ((statusG != S3StatusOK) &&
+ (statusG != S3StatusErrorPreconditionFailed)) {
+ printError();
+ }
+
+ S3_deinitialize();
+}
+
+
+// generate query string ------------------------------------------------------
+
+static void generate_query_string(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: bucket[/key]\n");
+ usageExit(stderr);
+ }
+
+ const char *bucketName = argv[optindex];
+ const char *key = 0;
+
+ // Split bucket/key
+ char *slash = argv[optindex++];
+ while (*slash && (*slash != '/')) {
+ slash++;
+ }
+ if (*slash) {
+ *slash++ = 0;
+ key = slash;
+ }
+ else {
+ key = 0;
+ }
+
+ int64_t expires = -1;
+
+ const char *resource = 0;
+
+ while (optindex < argc) {
+ char *param = argv[optindex++];
+ if (!strncmp(param, EXPIRES_PREFIX, EXPIRES_PREFIX_LEN)) {
+ expires = parseIso8601Time(&(param[EXPIRES_PREFIX_LEN]));
+ if (expires < 0) {
+ fprintf(stderr, "\nERROR: Invalid expires time "
+ "value; ISO 8601 time format required\n");
+ usageExit(stderr);
+ }
+ }
+ else if (!strncmp(param, RESOURCE_PREFIX, RESOURCE_PREFIX_LEN)) {
+ resource = &(param[RESOURCE_PREFIX_LEN]);
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown param: %s\n", param);
+ usageExit(stderr);
+ }
+ }
+
+ S3_init();
+
+ S3BucketContext bucketContext =
+ {
+ bucketName,
+ protocolG,
+ uriStyleG,
+ accessKeyIdG,
+ secretAccessKeyG
+ };
+
+ char buffer[S3_MAX_AUTHENTICATED_QUERY_STRING_SIZE];
+
+ S3Status status = S3_generate_authenticated_query_string
+ (buffer, &bucketContext, key, expires, resource);
+
+ if (status != S3StatusOK) {
+ printf("Failed to generate authenticated query string: %s\n",
+ S3_get_status_name(status));
+ }
+ else {
+ printf("%s\n", buffer);
+ }
+
+ S3_deinitialize();
+}
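+
+// The output is an authenticated (presigned) request string that carries the
+// access key id, expiration time and request signature as query parameters;
+// see S3_generate_authenticated_query_string for the exact form produced.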
+
+
+// get acl -------------------------------------------------------------------
+
+void get_acl(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: bucket[/key]\n");
+ usageExit(stderr);
+ }
+
+ const char *bucketName = argv[optindex];
+ const char *key = 0;
+
+ // Split bucket/key
+ char *slash = argv[optindex++];
+ while (*slash && (*slash != '/')) {
+ slash++;
+ }
+ if (*slash) {
+ *slash++ = 0;
+ key = slash;
+ }
+ else {
+ key = 0;
+ }
+
+ const char *filename = 0;
+
+ while (optindex < argc) {
+ char *param = argv[optindex++];
+ if (!strncmp(param, FILENAME_PREFIX, FILENAME_PREFIX_LEN)) {
+ filename = &(param[FILENAME_PREFIX_LEN]);
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown param: %s\n", param);
+ usageExit(stderr);
+ }
+ }
+
+ FILE *outfile = 0;
+
+ if (filename) {
+ // Stat the file, and if it doesn't exist, open it in w mode
+ struct stat buf;
+ if (stat(filename, &buf) == -1) {
+ outfile = fopen(filename, "w" FOPEN_EXTRA_FLAGS);
+ }
+ else {
+ // Open in r+ so that we don't truncate the file; if there is an
+ // error and we write no bytes, the file is left unmodified
+ outfile = fopen(filename, "r+" FOPEN_EXTRA_FLAGS);
+ }
+
+ if (!outfile) {
+ fprintf(stderr, "\nERROR: Failed to open output file %s: ",
+ filename);
+ perror(0);
+ exit(-1);
+ }
+ }
+ else if (showResponsePropertiesG) {
+ fprintf(stderr, "\nERROR: getacl -s requires a filename parameter\n");
+ usageExit(stderr);
+ }
+ else {
+ outfile = stdout;
+ }
+
+ int aclGrantCount;
+ S3AclGrant aclGrants[S3_MAX_ACL_GRANT_COUNT];
+ char ownerId[S3_MAX_GRANTEE_USER_ID_SIZE];
+ char ownerDisplayName[S3_MAX_GRANTEE_DISPLAY_NAME_SIZE];
+
+ S3_init();
+
+ S3BucketContext bucketContext =
+ {
+ bucketName,
+ protocolG,
+ uriStyleG,
+ accessKeyIdG,
+ secretAccessKeyG
+ };
+
+ S3ResponseHandler responseHandler =
+ {
+ &responsePropertiesCallback,
+ &responseCompleteCallback
+ };
+
+ do {
+ S3_get_acl(&bucketContext, key, ownerId, ownerDisplayName,
+ &aclGrantCount, aclGrants, 0, &responseHandler, 0);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if (statusG == S3StatusOK) {
+ fprintf(outfile, "OwnerID %s %s\n", ownerId, ownerDisplayName);
+ fprintf(outfile, "%-6s %-90s %-12s\n", " Type",
+ " User Identifier",
+ " Permission");
+ fprintf(outfile, "------ "
+ "------------------------------------------------------------"
+ "------------------------------ ------------\n");
+ int i;
+ for (i = 0; i < aclGrantCount; i++) {
+ S3AclGrant *grant = &(aclGrants[i]);
+ const char *type;
+ char composedId[S3_MAX_GRANTEE_USER_ID_SIZE +
+ S3_MAX_GRANTEE_DISPLAY_NAME_SIZE + 16];
+ const char *id;
+
+ switch (grant->granteeType) {
+ case S3GranteeTypeAmazonCustomerByEmail:
+ type = "Email";
+ id = grant->grantee.amazonCustomerByEmail.emailAddress;
+ break;
+ case S3GranteeTypeCanonicalUser:
+ type = "UserID";
+ snprintf(composedId, sizeof(composedId),
+ "%s (%s)", grant->grantee.canonicalUser.id,
+ grant->grantee.canonicalUser.displayName);
+ id = composedId;
+ break;
+ case S3GranteeTypeAllAwsUsers:
+ type = "Group";
+ id = "Authenticated AWS Users";
+ break;
+ case S3GranteeTypeAllUsers:
+ type = "Group";
+ id = "All Users";
+ break;
+ default:
+ type = "Group";
+ id = "Log Delivery";
+ break;
+ }
+ const char *perm;
+ switch (grant->permission) {
+ case S3PermissionRead:
+ perm = "READ";
+ break;
+ case S3PermissionWrite:
+ perm = "WRITE";
+ break;
+ case S3PermissionReadACP:
+ perm = "READ_ACP";
+ break;
+ case S3PermissionWriteACP:
+ perm = "WRITE_ACP";
+ break;
+ default:
+ perm = "FULL_CONTROL";
+ break;
+ }
+ fprintf(outfile, "%-6s %-90s %-12s\n", type, id, perm);
+ }
+ }
+ else {
+ printError();
+ }
+
+ fclose(outfile);
+
+ S3_deinitialize();
+}
+
+
+// set acl -------------------------------------------------------------------
+
+void set_acl(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: bucket[/key]\n");
+ usageExit(stderr);
+ }
+
+ const char *bucketName = argv[optindex];
+ const char *key = 0;
+
+ // Split bucket/key
+ char *slash = argv[optindex++];
+ while (*slash && (*slash != '/')) {
+ slash++;
+ }
+ if (*slash) {
+ *slash++ = 0;
+ key = slash;
+ }
+ else {
+ key = 0;
+ }
+
+ const char *filename = 0;
+
+ while (optindex < argc) {
+ char *param = argv[optindex++];
+ if (!strncmp(param, FILENAME_PREFIX, FILENAME_PREFIX_LEN)) {
+ filename = &(param[FILENAME_PREFIX_LEN]);
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown param: %s\n", param);
+ usageExit(stderr);
+ }
+ }
+
+ FILE *infile;
+
+ if (filename) {
+ if (!(infile = fopen(filename, "r" FOPEN_EXTRA_FLAGS))) {
+ fprintf(stderr, "\nERROR: Failed to open input file %s: ",
+ filename);
+ perror(0);
+ exit(-1);
+ }
+ }
+ else {
+ infile = stdin;
+ }
+
+ // Read in the complete ACL
+ char aclBuf[65536];
+ aclBuf[fread(aclBuf, 1, sizeof(aclBuf) - 1, infile)] = 0;
+ char ownerId[S3_MAX_GRANTEE_USER_ID_SIZE];
+ char ownerDisplayName[S3_MAX_GRANTEE_DISPLAY_NAME_SIZE];
+
+ // Parse it
+ int aclGrantCount;
+ S3AclGrant aclGrants[S3_MAX_ACL_GRANT_COUNT];
+ if (!convert_simple_acl(aclBuf, ownerId, ownerDisplayName,
+ &aclGrantCount, aclGrants)) {
+ fprintf(stderr, "\nERROR: Failed to parse ACLs\n");
+ fclose(infile);
+ exit(-1);
+ }
+
+ S3_init();
+
+ S3BucketContext bucketContext =
+ {
+ bucketName,
+ protocolG,
+ uriStyleG,
+ accessKeyIdG,
+ secretAccessKeyG
+ };
+
+ S3ResponseHandler responseHandler =
+ {
+ &responsePropertiesCallback,
+ &responseCompleteCallback
+ };
+
+ do {
+ S3_set_acl(&bucketContext, key, ownerId, ownerDisplayName,
+ aclGrantCount, aclGrants, 0, &responseHandler, 0);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if (statusG != S3StatusOK) {
+ printError();
+ }
+
+ fclose(infile);
+
+ S3_deinitialize();
+}
+
+
+// get logging ----------------------------------------------------------------
+
+void get_logging(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: bucket\n");
+ usageExit(stderr);
+ }
+
+ const char *bucketName = argv[optindex++];
+ const char *filename = 0;
+
+ while (optindex < argc) {
+ char *param = argv[optindex++];
+ if (!strncmp(param, FILENAME_PREFIX, FILENAME_PREFIX_LEN)) {
+ filename = &(param[FILENAME_PREFIX_LEN]);
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown param: %s\n", param);
+ usageExit(stderr);
+ }
+ }
+
+ FILE *outfile = 0;
+
+ if (filename) {
+ // Stat the file, and if it doesn't exist, open it in w mode
+ struct stat buf;
+ if (stat(filename, &buf) == -1) {
+ outfile = fopen(filename, "w" FOPEN_EXTRA_FLAGS);
+ }
+ else {
+ // Open in r+ so that we don't truncate the file; if there is an
+ // error and we write no bytes, the file is left unmodified
+ outfile = fopen(filename, "r+" FOPEN_EXTRA_FLAGS);
+ }
+
+ if (!outfile) {
+ fprintf(stderr, "\nERROR: Failed to open output file %s: ",
+ filename);
+ perror(0);
+ exit(-1);
+ }
+ }
+ else if (showResponsePropertiesG) {
+ fprintf(stderr, "\nERROR: getlogging -s requires a filename "
+ "parameter\n");
+ usageExit(stderr);
+ }
+ else {
+ outfile = stdout;
+ }
+
+ int aclGrantCount;
+ S3AclGrant aclGrants[S3_MAX_ACL_GRANT_COUNT];
+ char targetBucket[S3_MAX_BUCKET_NAME_SIZE];
+ char targetPrefix[S3_MAX_KEY_SIZE];
+
+ S3_init();
+
+ S3BucketContext bucketContext =
+ {
+ bucketName,
+ protocolG,
+ uriStyleG,
+ accessKeyIdG,
+ secretAccessKeyG
+ };
+
+ S3ResponseHandler responseHandler =
+ {
+ &responsePropertiesCallback,
+ &responseCompleteCallback
+ };
+
+ do {
+ S3_get_server_access_logging(&bucketContext, targetBucket, targetPrefix,
+ &aclGrantCount, aclGrants, 0,
+ &responseHandler, 0);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if (statusG == S3StatusOK) {
+ if (targetBucket[0]) {
+ printf("Target Bucket: %s\n", targetBucket);
+ if (targetPrefix[0]) {
+ printf("Target Prefix: %s\n", targetPrefix);
+ }
+ fprintf(outfile, "%-6s %-90s %-12s\n", " Type",
+ " User Identifier",
+ " Permission");
+ fprintf(outfile, "------ "
+ "---------------------------------------------------------"
+ "--------------------------------- ------------\n");
+ int i;
+ for (i = 0; i < aclGrantCount; i++) {
+ S3AclGrant *grant = &(aclGrants[i]);
+ const char *type;
+ char composedId[S3_MAX_GRANTEE_USER_ID_SIZE +
+ S3_MAX_GRANTEE_DISPLAY_NAME_SIZE + 16];
+ const char *id;
+
+ switch (grant->granteeType) {
+ case S3GranteeTypeAmazonCustomerByEmail:
+ type = "Email";
+ id = grant->grantee.amazonCustomerByEmail.emailAddress;
+ break;
+ case S3GranteeTypeCanonicalUser:
+ type = "UserID";
+ snprintf(composedId, sizeof(composedId),
+ "%s (%s)", grant->grantee.canonicalUser.id,
+ grant->grantee.canonicalUser.displayName);
+ id = composedId;
+ break;
+ case S3GranteeTypeAllAwsUsers:
+ type = "Group";
+ id = "Authenticated AWS Users";
+ break;
+ case S3GranteeTypeAllUsers:
+ type = "Group";
+ id = "All Users";
+ break;
+ default:
+ type = "Group";
+ id = "Log Delivery";
+ break;
+ }
+ const char *perm;
+ switch (grant->permission) {
+ case S3PermissionRead:
+ perm = "READ";
+ break;
+ case S3PermissionWrite:
+ perm = "WRITE";
+ break;
+ case S3PermissionReadACP:
+ perm = "READ_ACP";
+ break;
+ case S3PermissionWriteACP:
+ perm = "WRITE_ACP";
+ break;
+ default:
+ perm = "FULL_CONTROL";
+ break;
+ }
+ fprintf(outfile, "%-6s %-90s %-12s\n", type, id, perm);
+ }
+ }
+ else {
+ printf("Service logging is not enabled for this bucket.\n");
+ }
+ }
+ else {
+ printError();
+ }
+
+ fclose(outfile);
+
+ S3_deinitialize();
+}
+
+
+// set logging ----------------------------------------------------------------
+
+void set_logging(int argc, char **argv, int optindex)
+{
+ if (optindex == argc) {
+ fprintf(stderr, "\nERROR: Missing parameter: bucket\n");
+ usageExit(stderr);
+ }
+
+ const char *bucketName = argv[optindex++];
+
+ const char *targetBucket = 0, *targetPrefix = 0, *filename = 0;
+
+ while (optindex < argc) {
+ char *param = argv[optindex++];
+ if (!strncmp(param, TARGET_BUCKET_PREFIX, TARGET_BUCKET_PREFIX_LEN)) {
+ targetBucket = &(param[TARGET_BUCKET_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, TARGET_PREFIX_PREFIX,
+ TARGET_PREFIX_PREFIX_LEN)) {
+ targetPrefix = &(param[TARGET_PREFIX_PREFIX_LEN]);
+ }
+ else if (!strncmp(param, FILENAME_PREFIX, FILENAME_PREFIX_LEN)) {
+ filename = &(param[FILENAME_PREFIX_LEN]);
+ }
+ else {
+ fprintf(stderr, "\nERROR: Unknown param: %s\n", param);
+ usageExit(stderr);
+ }
+ }
+
+ int aclGrantCount = 0;
+ S3AclGrant aclGrants[S3_MAX_ACL_GRANT_COUNT];
+
+ if (targetBucket) {
+ FILE *infile;
+
+ if (filename) {
+ if (!(infile = fopen(filename, "r" FOPEN_EXTRA_FLAGS))) {
+ fprintf(stderr, "\nERROR: Failed to open input file %s: ",
+ filename);
+ perror(0);
+ exit(-1);
+ }
+ }
+ else {
+ infile = stdin;
+ }
+
+ // Read in the complete ACL
+ char aclBuf[65536];
+ aclBuf[fread(aclBuf, 1, sizeof(aclBuf) - 1, infile)] = 0;
+ char ownerId[S3_MAX_GRANTEE_USER_ID_SIZE];
+ char ownerDisplayName[S3_MAX_GRANTEE_DISPLAY_NAME_SIZE];
+
+ // Parse it
+ if (!convert_simple_acl(aclBuf, ownerId, ownerDisplayName,
+ &aclGrantCount, aclGrants)) {
+ fprintf(stderr, "\nERROR: Failed to parse ACLs\n");
+ fclose(infile);
+ exit(-1);
+ }
+
+ fclose(infile);
+ }
+
+ S3_init();
+
+ S3BucketContext bucketContext =
+ {
+ bucketName,
+ protocolG,
+ uriStyleG,
+ accessKeyIdG,
+ secretAccessKeyG
+ };
+
+ S3ResponseHandler responseHandler =
+ {
+ &responsePropertiesCallback,
+ &responseCompleteCallback
+ };
+
+ do {
+ S3_set_server_access_logging(&bucketContext, targetBucket,
+ targetPrefix, aclGrantCount, aclGrants,
+ 0, &responseHandler, 0);
+ } while (S3_status_is_retryable(statusG) && should_retry());
+
+ if (statusG != S3StatusOK) {
+ printError();
+ }
+
+ S3_deinitialize();
+}
+
+
+// main ----------------------------------------------------------------------
+
+int main(int argc, char **argv)
+{
+ // Parse args
+ while (1) {
+ int idx = 0;
+ int c = getopt_long(argc, argv, "fhusr:", longOptionsG, &idx);
+
+ if (c == -1) {
+ // End of options
+ break;
+ }
+
+ switch (c) {
+ case 'f':
+ forceG = 1;
+ break;
+ case 'h':
+ uriStyleG = S3UriStyleVirtualHost;
+ break;
+ case 'u':
+ protocolG = S3ProtocolHTTP;
+ break;
+ case 's':
+ showResponsePropertiesG = 1;
+ break;
+ case 'r': {
+ const char *v = optarg;
+ retriesG = 0;
+ while (*v) {
+ retriesG *= 10;
+ retriesG += *v - '0';
+ v++;
+ }
+ break;
+ }
+ default:
+ fprintf(stderr, "\nERROR: Unknown option: -%c\n", c);
+ // Usage exit
+ usageExit(stderr);
+ }
+ }
+
+ // The first non-option argument gives the operation to perform
+ if (optind == argc) {
+ fprintf(stderr, "\n\nERROR: Missing argument: command\n\n");
+ usageExit(stderr);
+ }
+
+ const char *command = argv[optind++];
+
+ if (!strcmp(command, "help")) {
+ fprintf(stdout, "\ns3 is a program for performing single requests "
+ "to Amazon S3.\n");
+ usageExit(stdout);
+ }
+
+ accessKeyIdG = getenv("S3_ACCESS_KEY_ID");
+ if (!accessKeyIdG) {
+ fprintf(stderr, "Missing environment variable: S3_ACCESS_KEY_ID\n");
+ return -1;
+ }
+ secretAccessKeyG = getenv("S3_SECRET_ACCESS_KEY");
+ if (!secretAccessKeyG) {
+ fprintf(stderr,
+ "Missing environment variable: S3_SECRET_ACCESS_KEY\n");
+ return -1;
+ }
+
+ if (!strcmp(command, "list")) {
+ list(argc, argv, optind);
+ }
+ else if (!strcmp(command, "test")) {
+ test_bucket(argc, argv, optind);
+ }
+ else if (!strcmp(command, "create")) {
+ create_bucket(argc, argv, optind);
+ }
+ else if (!strcmp(command, "delete")) {
+ if (optind == argc) {
+ fprintf(stderr,
+ "\nERROR: Missing parameter: bucket or bucket/key\n");
+ usageExit(stderr);
+ }
+ char *val = argv[optind];
+ int hasSlash = 0;
+ while (*val) {
+ if (*val++ == '/') {
+ hasSlash = 1;
+ break;
+ }
+ }
+ if (hasSlash) {
+ delete_object(argc, argv, optind);
+ }
+ else {
+ delete_bucket(argc, argv, optind);
+ }
+ }
+ else if (!strcmp(command, "put")) {
+ put_object(argc, argv, optind);
+ }
+ else if (!strcmp(command, "copy")) {
+ copy_object(argc, argv, optind);
+ }
+ else if (!strcmp(command, "get")) {
+ get_object(argc, argv, optind);
+ }
+ else if (!strcmp(command, "head")) {
+ head_object(argc, argv, optind);
+ }
+ else if (!strcmp(command, "gqs")) {
+ generate_query_string(argc, argv, optind);
+ }
+ else if (!strcmp(command, "getacl")) {
+ get_acl(argc, argv, optind);
+ }
+ else if (!strcmp(command, "setacl")) {
+ set_acl(argc, argv, optind);
+ }
+ else if (!strcmp(command, "getlogging")) {
+ get_logging(argc, argv, optind);
+ }
+ else if (!strcmp(command, "setlogging")) {
+ set_logging(argc, argv, optind);
+ }
+ else {
+ fprintf(stderr, "Unknown command: %s\n", command);
+ return -1;
+ }
+
+ return 0;
+}
--- /dev/null
+/** **************************************************************************
+ * service.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <ctype.h>
+#include <stdlib.h>
+#include <string.h>
+#include <time.h>
+#include "request.h"
+
+
+typedef struct XmlCallbackData
+{
+ SimpleXml simpleXml;
+
+ S3ResponsePropertiesCallback *responsePropertiesCallback;
+ S3ListServiceCallback *listServiceCallback;
+ S3ResponseCompleteCallback *responseCompleteCallback;
+ void *callbackData;
+
+ string_buffer(ownerId, 256);
+ string_buffer(ownerDisplayName, 256);
+ string_buffer(bucketName, 256);
+ string_buffer(creationDate, 128);
+} XmlCallbackData;
+
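+// The response body parsed below is S3's ListAllMyBucketsResult document,
+// which (abridged) has this shape:
+//
+//   <ListAllMyBucketsResult>
+//     <Owner><ID>...</ID><DisplayName>...</DisplayName></Owner>
+//     <Buckets>
+//       <Bucket><Name>...</Name><CreationDate>...</CreationDate></Bucket>
+//     </Buckets>
+//   </ListAllMyBucketsResult>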
+
+static S3Status xmlCallback(const char *elementPath, const char *data,
+ int dataLen, void *callbackData)
+{
+ XmlCallbackData *cbData = (XmlCallbackData *) callbackData;
+
+ int fit;
+
+ if (data) {
+ if (!strcmp(elementPath, "ListAllMyBucketsResult/Owner/ID")) {
+ string_buffer_append(cbData->ownerId, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath,
+ "ListAllMyBucketsResult/Owner/DisplayName")) {
+ string_buffer_append(cbData->ownerDisplayName, data, dataLen, fit);
+ }
+ else if (!strcmp(elementPath,
+ "ListAllMyBucketsResult/Buckets/Bucket/Name")) {
+ string_buffer_append(cbData->bucketName, data, dataLen, fit);
+ }
+ else if (!strcmp
+ (elementPath,
+ "ListAllMyBucketsResult/Buckets/Bucket/CreationDate")) {
+ string_buffer_append(cbData->creationDate, data, dataLen, fit);
+ }
+ }
+ else {
+ if (!strcmp(elementPath, "ListAllMyBucketsResult/Buckets/Bucket")) {
+ // Parse date. Assume ISO-8601 date format.
+ time_t creationDate = parseIso8601Time(cbData->creationDate);
+
+ // Make the callback - a bucket just finished
+ S3Status status = (*(cbData->listServiceCallback))
+ (cbData->ownerId, cbData->ownerDisplayName,
+ cbData->bucketName, creationDate, cbData->callbackData);
+
+ string_buffer_initialize(cbData->bucketName);
+ string_buffer_initialize(cbData->creationDate);
+
+ return status;
+ }
+ }
+
+ return S3StatusOK;
+}
+
+
+static S3Status propertiesCallback
+ (const S3ResponseProperties *responseProperties, void *callbackData)
+{
+ XmlCallbackData *cbData = (XmlCallbackData *) callbackData;
+
+ return (*(cbData->responsePropertiesCallback))
+ (responseProperties, cbData->callbackData);
+}
+
+
+static S3Status dataCallback(int bufferSize, const char *buffer,
+ void *callbackData)
+{
+ XmlCallbackData *cbData = (XmlCallbackData *) callbackData;
+
+ return simplexml_add(&(cbData->simpleXml), buffer, bufferSize);
+}
+
+
+static void completeCallback(S3Status requestStatus,
+ const S3ErrorDetails *s3ErrorDetails,
+ void *callbackData)
+{
+ XmlCallbackData *cbData = (XmlCallbackData *) callbackData;
+
+ (*(cbData->responseCompleteCallback))
+ (requestStatus, s3ErrorDetails, cbData->callbackData);
+
+ simplexml_deinitialize(&(cbData->simpleXml));
+
+ free(cbData);
+}
+
+
+void S3_list_service(S3Protocol protocol, const char *accessKeyId,
+ const char *secretAccessKey,
+ S3RequestContext *requestContext,
+ const S3ListServiceHandler *handler, void *callbackData)
+{
+ // Create and set up the callback data
+ XmlCallbackData *data =
+ (XmlCallbackData *) malloc(sizeof(XmlCallbackData));
+ if (!data) {
+ (*(handler->responseHandler.completeCallback))
+ (S3StatusOutOfMemory, 0, callbackData);
+ return;
+ }
+
+ simplexml_initialize(&(data->simpleXml), &xmlCallback, data);
+
+ data->responsePropertiesCallback =
+ handler->responseHandler.propertiesCallback;
+ data->listServiceCallback = handler->listServiceCallback;
+ data->responseCompleteCallback = handler->responseHandler.completeCallback;
+ data->callbackData = callbackData;
+
+ string_buffer_initialize(data->ownerId);
+ string_buffer_initialize(data->ownerDisplayName);
+ string_buffer_initialize(data->bucketName);
+ string_buffer_initialize(data->creationDate);
+
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypeGET, // httpRequestType
+ { 0, // bucketName
+ protocol, // protocol
+ S3UriStylePath, // uriStyle
+ accessKeyId, // accessKeyId
+ secretAccessKey }, // secretAccessKey
+ 0, // key
+ 0, // queryParams
+ 0, // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ 0, // requestProperties
+ &propertiesCallback, // propertiesCallback
+ 0, // toS3Callback
+ 0, // toS3CallbackTotalSize
+ &dataCallback, // fromS3Callback
+ &completeCallback, // completeCallback
+ data // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
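+
+// Illustrative caller sketch (the callback and handler variable names below
+// are hypothetical; only the entry point and the handler fields used above
+// are assumed):
+//
+//     static S3Status listCb(const char *ownerId, const char *ownerDisplayName,
+//                            const char *bucketName, time_t creationDate,
+//                            void *callbackData)
+//     {
+//         printf("%s\n", bucketName);
+//         return S3StatusOK;
+//     }
+//
+//     S3ListServiceHandler handler =
+//     {
+//         .responseHandler = { .propertiesCallback = &propertiesCb,
+//                              .completeCallback = &completeCb },
+//         .listServiceCallback = &listCb
+//     };
+//     S3_list_service(protocol, accessKeyId, secretAccessKey, requestContext,
+//                     &handler, 0);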
+
+
--- /dev/null
+/** **************************************************************************
+ * server_access_logging.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <stdlib.h>
+#include <string.h>
+#include "libs3.h"
+#include "request.h"
+
+
+// get server access logging---------------------------------------------------
+
+typedef struct ConvertBlsData
+{
+ char *targetBucketReturn;
+ int targetBucketReturnLen;
+ char *targetPrefixReturn;
+ int targetPrefixReturnLen;
+ int *aclGrantCountReturn;
+ S3AclGrant *aclGrants;
+
+ string_buffer(emailAddress, S3_MAX_GRANTEE_EMAIL_ADDRESS_SIZE);
+ string_buffer(userId, S3_MAX_GRANTEE_USER_ID_SIZE);
+ string_buffer(userDisplayName, S3_MAX_GRANTEE_DISPLAY_NAME_SIZE);
+ string_buffer(groupUri, 128);
+ string_buffer(permission, 32);
+} ConvertBlsData;
+
+
+static S3Status convertBlsXmlCallback(const char *elementPath,
+ const char *data, int dataLen,
+ void *callbackData)
+{
+ ConvertBlsData *caData = (ConvertBlsData *) callbackData;
+
+ int fit;
+
+ if (data) {
+ if (!strcmp(elementPath, "BucketLoggingStatus/LoggingEnabled/"
+ "TargetBucket")) {
+ caData->targetBucketReturnLen +=
+ snprintf(&(caData->targetBucketReturn
+ [caData->targetBucketReturnLen]),
+ 255 - caData->targetBucketReturnLen - 1,
+ "%.*s", dataLen, data);
+ if (caData->targetBucketReturnLen >= 255) {
+ return S3StatusTargetBucketTooLong;
+ }
+ }
+ else if (!strcmp(elementPath, "BucketLoggingStatus/LoggingEnabled/"
+ "TargetPrefix")) {
+ caData->targetPrefixReturnLen +=
+ snprintf(&(caData->targetPrefixReturn
+ [caData->targetPrefixReturnLen]),
+ 255 - caData->targetPrefixReturnLen - 1,
+ "%.*s", dataLen, data);
+ if (caData->targetPrefixReturnLen >= 255) {
+ return S3StatusTargetPrefixTooLong;
+ }
+ }
+ else if (!strcmp(elementPath, "BucketLoggingStatus/LoggingEnabled/"
+ "TargetGrants/Grant/Grantee/EmailAddress")) {
+ // AmazonCustomerByEmail
+ string_buffer_append(caData->emailAddress, data, dataLen, fit);
+ if (!fit) {
+ return S3StatusEmailAddressTooLong;
+ }
+ }
+ else if (!strcmp(elementPath, "BucketLoggingStatus/LoggingEnabled/"
+ "TargetGrants/Grant/Grantee/ID")) {
+ // CanonicalUser
+ string_buffer_append(caData->userId, data, dataLen, fit);
+ if (!fit) {
+ return S3StatusUserIdTooLong;
+ }
+ }
+ else if (!strcmp(elementPath, "BucketLoggingStatus/LoggingEnabled/"
+ "TargetGrants/Grant/Grantee/DisplayName")) {
+ // CanonicalUser
+ string_buffer_append(caData->userDisplayName, data, dataLen, fit);
+ if (!fit) {
+ return S3StatusUserDisplayNameTooLong;
+ }
+ }
+ else if (!strcmp(elementPath, "BucketLoggingStatus/LoggingEnabled/"
+ "TargetGrants/Grant/Grantee/URI")) {
+ // Group
+ string_buffer_append(caData->groupUri, data, dataLen, fit);
+ if (!fit) {
+ return S3StatusGroupUriTooLong;
+ }
+ }
+ else if (!strcmp(elementPath, "BucketLoggingStatus/LoggingEnabled/"
+ "TargetGrants/Grant/Permission")) {
+ // Permission
+ string_buffer_append(caData->permission, data, dataLen, fit);
+ if (!fit) {
+ return S3StatusPermissionTooLong;
+ }
+ }
+ }
+ else {
+ if (!strcmp(elementPath, "BucketLoggingStatus/LoggingEnabled/"
+ "TargetGrants/Grant")) {
+ // A grant has just been completed; so add the next S3AclGrant
+ // based on the values read
+ if (*(caData->aclGrantCountReturn) == S3_MAX_ACL_GRANT_COUNT) {
+ return S3StatusTooManyGrants;
+ }
+
+ S3AclGrant *grant = &(caData->aclGrants
+ [*(caData->aclGrantCountReturn)]);
+
+ if (caData->emailAddress[0]) {
+ grant->granteeType = S3GranteeTypeAmazonCustomerByEmail;
+ strcpy(grant->grantee.amazonCustomerByEmail.emailAddress,
+ caData->emailAddress);
+ }
+ else if (caData->userId[0] && caData->userDisplayName[0]) {
+ grant->granteeType = S3GranteeTypeCanonicalUser;
+ strcpy(grant->grantee.canonicalUser.id, caData->userId);
+ strcpy(grant->grantee.canonicalUser.displayName,
+ caData->userDisplayName);
+ }
+ else if (caData->groupUri[0]) {
+ if (!strcmp(caData->groupUri,
+ "http://acs.amazonaws.com/groups/global/"
+ "AuthenticatedUsers")) {
+ grant->granteeType = S3GranteeTypeAllAwsUsers;
+ }
+ else if (!strcmp(caData->groupUri,
+ "http://acs.amazonaws.com/groups/global/"
+ "AllUsers")) {
+ grant->granteeType = S3GranteeTypeAllUsers;
+ }
+ else {
+ return S3StatusBadGrantee;
+ }
+ }
+ else {
+ return S3StatusBadGrantee;
+ }
+
+ if (!strcmp(caData->permission, "READ")) {
+ grant->permission = S3PermissionRead;
+ }
+ else if (!strcmp(caData->permission, "WRITE")) {
+ grant->permission = S3PermissionWrite;
+ }
+ else if (!strcmp(caData->permission, "READ_ACP")) {
+ grant->permission = S3PermissionReadACP;
+ }
+ else if (!strcmp(caData->permission, "WRITE_ACP")) {
+ grant->permission = S3PermissionWriteACP;
+ }
+ else if (!strcmp(caData->permission, "FULL_CONTROL")) {
+ grant->permission = S3PermissionFullControl;
+ }
+ else {
+ return S3StatusBadPermission;
+ }
+
+ (*(caData->aclGrantCountReturn))++;
+
+ string_buffer_initialize(caData->emailAddress);
+ string_buffer_initialize(caData->userId);
+ string_buffer_initialize(caData->userDisplayName);
+ string_buffer_initialize(caData->groupUri);
+ string_buffer_initialize(caData->permission);
+ }
+ }
+
+ return S3StatusOK;
+}
+
+
+static S3Status convert_bls(char *blsXml, char *targetBucketReturn,
+ char *targetPrefixReturn, int *aclGrantCountReturn,
+ S3AclGrant *aclGrants)
+{
+ ConvertBlsData data;
+
+ data.targetBucketReturn = targetBucketReturn;
+ data.targetBucketReturn[0] = 0;
+ data.targetBucketReturnLen = 0;
+ data.targetPrefixReturn = targetPrefixReturn;
+ data.targetPrefixReturn[0] = 0;
+ data.targetPrefixReturnLen = 0;
+ data.aclGrantCountReturn = aclGrantCountReturn;
+ data.aclGrants = aclGrants;
+ *aclGrantCountReturn = 0;
+ string_buffer_initialize(data.emailAddress);
+ string_buffer_initialize(data.userId);
+ string_buffer_initialize(data.userDisplayName);
+ string_buffer_initialize(data.groupUri);
+ string_buffer_initialize(data.permission);
+
+ // Use a simplexml parser
+ SimpleXml simpleXml;
+ simplexml_initialize(&simpleXml, &convertBlsXmlCallback, &data);
+
+ S3Status status = simplexml_add(&simpleXml, blsXml, strlen(blsXml));
+
+ simplexml_deinitialize(&simpleXml);
+
+ return status;
+}
+
+
+// Use a rather arbitrary max size for the document of 64K
+#define BLS_XML_DOC_MAXSIZE (64 * 1024)
+
+
+typedef struct GetBlsData
+{
+ SimpleXml simpleXml;
+
+ S3ResponsePropertiesCallback *responsePropertiesCallback;
+ S3ResponseCompleteCallback *responseCompleteCallback;
+ void *callbackData;
+
+ char *targetBucketReturn;
+ char *targetPrefixReturn;
+ int *aclGrantCountReturn;
+ S3AclGrant *aclGrants;
+ string_buffer(blsXmlDocument, BLS_XML_DOC_MAXSIZE);
+} GetBlsData;
+
+
+static S3Status getBlsPropertiesCallback
+ (const S3ResponseProperties *responseProperties, void *callbackData)
+{
+ GetBlsData *gsData = (GetBlsData *) callbackData;
+
+ return (*(gsData->responsePropertiesCallback))
+ (responseProperties, gsData->callbackData);
+}
+
+
+static S3Status getBlsDataCallback(int bufferSize, const char *buffer,
+ void *callbackData)
+{
+ GetBlsData *gsData = (GetBlsData *) callbackData;
+
+ int fit;
+
+ string_buffer_append(gsData->blsXmlDocument, buffer, bufferSize, fit);
+
+ return fit ? S3StatusOK : S3StatusXmlDocumentTooLarge;
+}
+
+
+static void getBlsCompleteCallback(S3Status requestStatus,
+ const S3ErrorDetails *s3ErrorDetails,
+ void *callbackData)
+{
+ GetBlsData *gsData = (GetBlsData *) callbackData;
+
+ if (requestStatus == S3StatusOK) {
+ // Parse the document
+ requestStatus = convert_bls
+ (gsData->blsXmlDocument, gsData->targetBucketReturn,
+ gsData->targetPrefixReturn, gsData->aclGrantCountReturn,
+ gsData->aclGrants);
+ }
+
+ (*(gsData->responseCompleteCallback))
+ (requestStatus, s3ErrorDetails, gsData->callbackData);
+
+ free(gsData);
+}
+
+
+void S3_get_server_access_logging(const S3BucketContext *bucketContext,
+ char *targetBucketReturn,
+ char *targetPrefixReturn,
+ int *aclGrantCountReturn,
+ S3AclGrant *aclGrants,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler,
+ void *callbackData)
+{
+ // Create the callback data
+ GetBlsData *gsData = (GetBlsData *) malloc(sizeof(GetBlsData));
+ if (!gsData) {
+ (*(handler->completeCallback))(S3StatusOutOfMemory, 0, callbackData);
+ return;
+ }
+
+ gsData->responsePropertiesCallback = handler->propertiesCallback;
+ gsData->responseCompleteCallback = handler->completeCallback;
+ gsData->callbackData = callbackData;
+
+ gsData->targetBucketReturn = targetBucketReturn;
+ gsData->targetPrefixReturn = targetPrefixReturn;
+ gsData->aclGrantCountReturn = aclGrantCountReturn;
+ gsData->aclGrants = aclGrants;
+ string_buffer_initialize(gsData->blsXmlDocument);
+ *aclGrantCountReturn = 0;
+
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypeGET, // httpRequestType
+ { bucketContext->bucketName, // bucketName
+ bucketContext->protocol, // protocol
+ bucketContext->uriStyle, // uriStyle
+ bucketContext->accessKeyId, // accessKeyId
+ bucketContext->secretAccessKey }, // secretAccessKey
+ 0, // key
+ 0, // queryParams
+ "logging", // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ 0, // putProperties
+ &getBlsPropertiesCallback, // propertiesCallback
+ 0, // toS3Callback
+ 0, // toS3CallbackTotalSize
+ &getBlsDataCallback, // fromS3Callback
+ &getBlsCompleteCallback, // completeCallback
+ gsData // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
+
+
+
+// set server access logging---------------------------------------------------
+
+static S3Status generateSalXmlDocument(const char *targetBucket,
+ const char *targetPrefix,
+ int aclGrantCount,
+ const S3AclGrant *aclGrants,
+ int *xmlDocumentLenReturn,
+ char *xmlDocument,
+ int xmlDocumentBufferSize)
+{
+ *xmlDocumentLenReturn = 0;
+
+#define append(fmt, ...) \
+ do { \
+ *xmlDocumentLenReturn += snprintf \
+ (&(xmlDocument[*xmlDocumentLenReturn]), \
+ xmlDocumentBufferSize - *xmlDocumentLenReturn - 1, \
+ fmt, __VA_ARGS__); \
+ if (*xmlDocumentLenReturn >= xmlDocumentBufferSize) { \
+ return S3StatusXmlDocumentTooLarge; \
+ } \
+ } while (0)
+
+ append("%s", "<BucketLoggingStatus "
+ "xmlns=\"http://doc.s3.amazonaws.com/2006-03-01\">");
+
+ if (targetBucket && targetBucket[0]) {
+ append("<LoggingEnabled><TargetBucket>%s</TargetBucket>", targetBucket);
+ append("<TargetPrefix>%s</TargetPrefix>",
+ targetPrefix ? targetPrefix : "");
+
+ if (aclGrantCount) {
+ append("%s", "<TargetGrants>");
+ int i;
+ for (i = 0; i < aclGrantCount; i++) {
+ append("%s", "<Grant><Grantee "
+ "xmlns:xsi=\"http://www.w3.org/2001/"
+ "XMLSchema-instance\" xsi:type=\"");
+ const S3AclGrant *grant = &(aclGrants[i]);
+ switch (grant->granteeType) {
+ case S3GranteeTypeAmazonCustomerByEmail:
+ append("AmazonCustomerByEmail\"><EmailAddress>%s"
+ "</EmailAddress>",
+ grant->grantee.amazonCustomerByEmail.emailAddress);
+ break;
+ case S3GranteeTypeCanonicalUser:
+ append("CanonicalUser\"><ID>%s</ID><DisplayName>%s"
+ "</DisplayName>",
+ grant->grantee.canonicalUser.id,
+ grant->grantee.canonicalUser.displayName);
+ break;
+ default: // case S3GranteeTypeAllAwsUsers/S3GranteeTypeAllUsers:
+ append("Group\"><URI>http://acs.amazonaws.com/groups/"
+ "global/%s</URI>",
+ (grant->granteeType == S3GranteeTypeAllAwsUsers) ?
+ "AuthenticatedUsers" : "AllUsers");
+ break;
+ }
+ append("</Grantee><Permission>%s</Permission></Grant>",
+ ((grant->permission == S3PermissionRead) ? "READ" :
+ (grant->permission == S3PermissionWrite) ? "WRITE" :
+ (grant->permission ==
+ S3PermissionReadACP) ? "READ_ACP" :
+ (grant->permission ==
+ S3PermissionWriteACP) ? "WRITE_ACP" : "FULL_CONTROL"));
+ }
+ append("%s", "</TargetGrants>");
+ }
+ append("%s", "</LoggingEnabled>");
+ }
+
+ append("%s", "</BucketLoggingStatus>");
+
+ return S3StatusOK;
+}
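+
+// For reference, the generated document has the following general shape
+// (illustrative values; the code above emits it without any whitespace):
+//
+//     <BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01">
+//       <LoggingEnabled>
+//         <TargetBucket>logbucket</TargetBucket>
+//         <TargetPrefix>logs/</TargetPrefix>
+//         <TargetGrants>
+//           <Grant>
+//             <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+//                      xsi:type="CanonicalUser">
+//               <ID>...</ID><DisplayName>...</DisplayName>
+//             </Grantee>
+//             <Permission>FULL_CONTROL</Permission>
+//           </Grant>
+//         </TargetGrants>
+//       </LoggingEnabled>
+//     </BucketLoggingStatus>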
+
+
+typedef struct SetSalData
+{
+ S3ResponsePropertiesCallback *responsePropertiesCallback;
+ S3ResponseCompleteCallback *responseCompleteCallback;
+ void *callbackData;
+
+ int salXmlDocumentLen;
+ char salXmlDocument[BLS_XML_DOC_MAXSIZE];
+ int salXmlDocumentBytesWritten;
+
+} SetSalData;
+
+
+static S3Status setSalPropertiesCallback
+ (const S3ResponseProperties *responseProperties, void *callbackData)
+{
+ SetSalData *paData = (SetSalData *) callbackData;
+
+ return (*(paData->responsePropertiesCallback))
+ (responseProperties, paData->callbackData);
+}
+
+
+static int setSalDataCallback(int bufferSize, char *buffer, void *callbackData)
+{
+ SetSalData *paData = (SetSalData *) callbackData;
+
+ int remaining = (paData->salXmlDocumentLen -
+ paData->salXmlDocumentBytesWritten);
+
+ int toCopy = bufferSize > remaining ? remaining : bufferSize;
+
+ if (!toCopy) {
+ return 0;
+ }
+
+ memcpy(buffer, &(paData->salXmlDocument
+ [paData->salXmlDocumentBytesWritten]), toCopy);
+
+ paData->salXmlDocumentBytesWritten += toCopy;
+
+ return toCopy;
+}
+
+
+static void setSalCompleteCallback(S3Status requestStatus,
+ const S3ErrorDetails *s3ErrorDetails,
+ void *callbackData)
+{
+ SetSalData *paData = (SetSalData *) callbackData;
+
+ (*(paData->responseCompleteCallback))
+ (requestStatus, s3ErrorDetails, paData->callbackData);
+
+ free(paData);
+}
+
+
+void S3_set_server_access_logging(const S3BucketContext *bucketContext,
+ const char *targetBucket,
+ const char *targetPrefix, int aclGrantCount,
+ const S3AclGrant *aclGrants,
+ S3RequestContext *requestContext,
+ const S3ResponseHandler *handler,
+ void *callbackData)
+{
+ if (aclGrantCount > S3_MAX_ACL_GRANT_COUNT) {
+ (*(handler->completeCallback))
+ (S3StatusTooManyGrants, 0, callbackData);
+ return;
+ }
+
+ SetSalData *data = (SetSalData *) malloc(sizeof(SetSalData));
+ if (!data) {
+ (*(handler->completeCallback))(S3StatusOutOfMemory, 0, callbackData);
+ return;
+ }
+
+ // Convert aclGrants to XML document
+ S3Status status = generateSalXmlDocument
+ (targetBucket, targetPrefix, aclGrantCount, aclGrants,
+ &(data->salXmlDocumentLen), data->salXmlDocument,
+ sizeof(data->salXmlDocument));
+ if (status != S3StatusOK) {
+ free(data);
+ (*(handler->completeCallback))(status, 0, callbackData);
+ return;
+ }
+
+ data->responsePropertiesCallback = handler->propertiesCallback;
+ data->responseCompleteCallback = handler->completeCallback;
+ data->callbackData = callbackData;
+
+ data->salXmlDocumentBytesWritten = 0;
+
+ // Set up the RequestParams
+ RequestParams params =
+ {
+ HttpRequestTypePUT, // httpRequestType
+ { bucketContext->bucketName, // bucketName
+ bucketContext->protocol, // protocol
+ bucketContext->uriStyle, // uriStyle
+ bucketContext->accessKeyId, // accessKeyId
+ bucketContext->secretAccessKey }, // secretAccessKey
+ 0, // key
+ 0, // queryParams
+ "logging", // subResource
+ 0, // copySourceBucketName
+ 0, // copySourceKey
+ 0, // getConditions
+ 0, // startByte
+ 0, // byteCount
+ 0, // putProperties
+ &setSalPropertiesCallback, // propertiesCallback
+ &setSalDataCallback, // toS3Callback
+ data->salXmlDocumentLen, // toS3CallbackTotalSize
+ 0, // fromS3Callback
+ &setSalCompleteCallback, // completeCallback
+ data // callbackData
+ };
+
+ // Perform the request
+ request_perform(&params, requestContext);
+}
--- /dev/null
+/** **************************************************************************
+ * simplexml.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <libxml/parser.h>
+#include <string.h>
+#include "simplexml.h"
+
+// Use libxml2 for parsing XML. XML is severely overused in modern
+// computing. It is useful for only a very small subset of tasks, but
+// software developers who don't know better and are afraid to go against the
+// grain use it for everything, and in most cases, it is completely
+// inappropriate. Usually, the document structure is severely under-specified
+// as well, as is the case with S3. We do our best by just caring about the
+// most important aspects of the S3 "XML document" responses: the elements and
+// their values. The SAX API (just about the lamest API ever devised and
+// proof that XML sucks - well, the real proof is how crappy all of the XML
+// parsing libraries are, including libxml2 - but I digress) is used here
+// because we don't need much from the parser and SAX is fast and low memory.
+//
+// Note that for simplicity we assume all ASCII here. No attempts are made to
+// detect non-ASCII sequences in utf-8 and convert them into ASCII in any way.
+// S3 appears to only use ASCII anyway.
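+//
+// The callback contract implemented below: for a fragment such as
+// <A><B>text</B></A>, the registered SimpleXmlCallback is invoked with
+// ("A/B", "text", 4) from the characters handler (possibly split across
+// several calls), then with ("A/B", 0, 0) when </B> closes, and with
+// ("A", 0, 0) when </A> closes; a null data pointer thus marks the end of
+// an element.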
+
+
+static xmlEntityPtr saxGetEntity(void *user_data, const xmlChar *name)
+{
+ (void) user_data;
+
+ return xmlGetPredefinedEntity(name);
+}
+
+
+static void saxStartElement(void *user_data, const xmlChar *nameUtf8,
+ const xmlChar **attr)
+{
+ (void) attr;
+
+ SimpleXml *simpleXml = (SimpleXml *) user_data;
+
+ if (simpleXml->status != S3StatusOK) {
+ return;
+ }
+
+ // Assume that name has no non-ASCII in it
+ char *name = (char *) nameUtf8;
+
+ // Append the element to the element path
+ int len = strlen(name);
+
+ if ((simpleXml->elementPathLen + len + 1) >=
+ (int) sizeof(simpleXml->elementPath)) {
+ // Cannot handle this element, stop!
+ simpleXml->status = S3StatusXmlParseFailure;
+ return;
+ }
+
+ if (simpleXml->elementPathLen) {
+ simpleXml->elementPath[simpleXml->elementPathLen++] = '/';
+ }
+ strcpy(&(simpleXml->elementPath[simpleXml->elementPathLen]), name);
+ simpleXml->elementPathLen += len;
+}
+
+
+static void saxEndElement(void *user_data, const xmlChar *name)
+{
+ (void) name;
+
+ SimpleXml *simpleXml = (SimpleXml *) user_data;
+
+ if (simpleXml->status != S3StatusOK) {
+ return;
+ }
+
+ // Call back with 0 data
+ simpleXml->status = (*(simpleXml->callback))
+ (simpleXml->elementPath, 0, 0, simpleXml->callbackData);
+
+ while ((simpleXml->elementPathLen > 0) &&
+ (simpleXml->elementPath[simpleXml->elementPathLen] != '/')) {
+ simpleXml->elementPathLen--;
+ }
+
+ simpleXml->elementPath[simpleXml->elementPathLen] = 0;
+}
+
+
+static void saxCharacters(void *user_data, const xmlChar *ch, int len)
+{
+ SimpleXml *simpleXml = (SimpleXml *) user_data;
+
+ if (simpleXml->status != S3StatusOK) {
+ return;
+ }
+
+ simpleXml->status = (*(simpleXml->callback))
+ (simpleXml->elementPath, (char *) ch, len, simpleXml->callbackData);
+}
+
+
+static void saxError(void *user_data, const char *msg, ...)
+{
+ (void) msg;
+
+ SimpleXml *simpleXml = (SimpleXml *) user_data;
+
+ if (simpleXml->status != S3StatusOK) {
+ return;
+ }
+
+ simpleXml->status = S3StatusXmlParseFailure;
+}
+
+
+static struct _xmlSAXHandler saxHandlerG =
+{
+ 0, // internalSubsetSAXFunc
+ 0, // isStandaloneSAXFunc
+ 0, // hasInternalSubsetSAXFunc
+ 0, // hasExternalSubsetSAXFunc
+ 0, // resolveEntitySAXFunc
+ &saxGetEntity, // getEntitySAXFunc
+ 0, // entityDeclSAXFunc
+ 0, // notationDeclSAXFunc
+ 0, // attributeDeclSAXFunc
+ 0, // elementDeclSAXFunc
+ 0, // unparsedEntityDeclSAXFunc
+ 0, // setDocumentLocatorSAXFunc
+ 0, // startDocumentSAXFunc
+ 0, // endDocumentSAXFunc
+ &saxStartElement, // startElementSAXFunc
+ &saxEndElement, // endElementSAXFunc
+ 0, // referenceSAXFunc
+ &saxCharacters, // charactersSAXFunc
+ 0, // ignorableWhitespaceSAXFunc
+ 0, // processingInstructionSAXFunc
+ 0, // commentSAXFunc
+ 0, // warningSAXFunc
+ &saxError, // errorSAXFunc
+ &saxError, // fatalErrorSAXFunc
+ 0, // getParameterEntitySAXFunc
+ &saxCharacters, // cdataBlockSAXFunc
+ 0, // externalSubsetSAXFunc
+ 0, // initialized
+ 0, // _private
+ 0, // startElementNsSAX2Func
+ 0, // endElementNsSAX2Func
+ 0 // xmlStructuredErrorFunc serror;
+};
+
+void simplexml_initialize(SimpleXml *simpleXml,
+ SimpleXmlCallback *callback, void *callbackData)
+{
+ simpleXml->callback = callback;
+ simpleXml->callbackData = callbackData;
+ simpleXml->elementPathLen = 0;
+ simpleXml->status = S3StatusOK;
+ simpleXml->xmlParser = 0;
+}
+
+
+void simplexml_deinitialize(SimpleXml *simpleXml)
+{
+ if (simpleXml->xmlParser) {
+ xmlFreeParserCtxt(simpleXml->xmlParser);
+ }
+}
+
+
+S3Status simplexml_add(SimpleXml *simpleXml, const char *data, int dataLen)
+{
+ if (!simpleXml->xmlParser &&
+ (!(simpleXml->xmlParser = xmlCreatePushParserCtxt
+ (&saxHandlerG, simpleXml, 0, 0, 0)))) {
+ return S3StatusInternalError;
+ }
+
+ if (xmlParseChunk((xmlParserCtxtPtr) simpleXml->xmlParser,
+ data, dataLen, 0)) {
+ return S3StatusXmlParseFailure;
+ }
+
+ return simpleXml->status;
+}
--- /dev/null
+/** **************************************************************************
+ * testsimplexml.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <time.h>
+#include "simplexml.h"
+
+static S3Status simpleXmlCallback(const char *elementPath, const char *data,
+ int dataLen, void *callbackData)
+{
+ (void) callbackData;
+
+ printf("[%s]: [%.*s]\n", elementPath, dataLen, data);
+
+ return S3StatusOK;
+}
+
+
+// The only argument allowed is a specification of the random seed to use
+int main(int argc, char **argv)
+{
+ if (argc > 1) {
+ char *arg = argv[1];
+ int seed = 0;
+ while (*arg) {
+ seed *= 10;
+ seed += (*arg++ - '0');
+ }
+
+ srand(seed);
+ }
+ else {
+ srand(time(0));
+ }
+
+ SimpleXml simpleXml;
+
+ simplexml_initialize(&simpleXml, &simpleXmlCallback, 0);
+
+ // Read chunks of 10K from stdin, and then feed them in random chunks
+ // to simplexml_add
+ char inbuf[10000];
+
+ int amt_read;
+ while ((amt_read = fread(inbuf, 1, sizeof(inbuf), stdin)) > 0) {
+ char *buf = inbuf;
+ while (amt_read) {
+ int amt = (rand() % amt_read) + 1;
+ S3Status status = simplexml_add(&simpleXml, buf, amt);
+ if (status != S3StatusOK) {
+ fprintf(stderr, "ERROR: Parse failure: %d\n", status);
+ simplexml_deinitialize(&simpleXml);
+ return -1;
+ }
+ buf += amt, amt_read -= amt;
+ }
+ }
+
+ simplexml_deinitialize(&simpleXml);
+
+ return 0;
+}
--- /dev/null
+/** **************************************************************************
+ * util.c
+ *
+ * Copyright 2008 Bryan Ischo <bryan@ischo.com>
+ *
+ * This file is part of libs3.
+ *
+ * libs3 is free software: you can redistribute it and/or modify it under the
+ * terms of the GNU General Public License as published by the Free Software
+ * Foundation, version 3 of the License.
+ *
+ * In addition, as a special exception, the copyright holders give
+ * permission to link the code of this library and its programs with the
+ * OpenSSL library, and distribute linked combinations including the two.
+ *
+ * libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License version 3
+ * along with libs3, in a file named COPYING. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ ************************************************************************** **/
+
+#include <ctype.h>
+#include <string.h>
+#include "util.h"
+
+
+// Convenience utility for making the code look nicer. Tests a string
+// against a format; only the characters specified in the format are
+// checked (i.e. if the string is longer than the format, the string still
+// checks out ok). Format characters are:
+// d - is a digit
+// anything else - is that character
+// Returns nonzero if the string checks out, zero if it does not.
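+// For example, checkString("2008-06-29T12:00:00", "dddd-dd-ddTdd:dd:dd")
+// returns 1, while checkString("2008/06/29", "dddd-dd-dd") returns 0 because
+// the literal '-' characters do not match.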
+static int checkString(const char *str, const char *format)
+{
+ while (*format) {
+ if (*format == 'd') {
+ if (!isdigit(*str)) {
+ return 0;
+ }
+ }
+ else if (*str != *format) {
+ return 0;
+ }
+ str++, format++;
+ }
+
+ return 1;
+}
+
+
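+// URL-encodes src into dest: alphanumerics and the characters in urlSafe are
+// copied through, a space becomes '+', and every other byte becomes a %XX
+// escape. At most maxSrcSize source characters are accepted; on overflow the
+// function stores an empty string and returns 0, otherwise it returns 1.
+// dest must have room for the worst case of three output bytes per input
+// byte plus the terminating null. For example, "my photo.jpg" encodes to
+// "my+photo.jpg" and "a&b" encodes to "a%26b".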
+int urlEncode(char *dest, const char *src, int maxSrcSize)
+{
+ static const char *urlSafe = "-_.!~*'()/";
+ static const char *hex = "0123456789ABCDEF";
+
+ int len = 0;
+
+ if (src) while (*src) {
+ if (++len > maxSrcSize) {
+ *dest = 0;
+ return 0;
+ }
+ const char *urlsafe = urlSafe;
+ int isurlsafe = 0;
+ while (*urlsafe) {
+ if (*urlsafe == *src) {
+ isurlsafe = 1;
+ break;
+ }
+ urlsafe++;
+ }
+ if (isurlsafe || isalnum(*src)) {
+ *dest++ = *src++;
+ }
+ else if (*src == ' ') {
+ *dest++ = '+';
+ src++;
+ }
+ else {
+ *dest++ = '%';
+ *dest++ = hex[*src / 16];
+ *dest++ = hex[*src % 16];
+ src++;
+ }
+ }
+
+ *dest = 0;
+
+ return 1;
+}
+
+
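+// Parses a timestamp of the form YYYY-MM-DDThh:mm:ss[.frac][Z|+hh:mm|-hh:mm]
+// into seconds since the epoch, returning -1 if the leading date/time part
+// does not match that format. The conversion goes through mktime(), so the
+// base value is interpreted in the local time zone before any explicit
+// +hh:mm/-hh:mm offset is applied.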
+int64_t parseIso8601Time(const char *str)
+{
+ // Check to make sure that it has a valid format
+ if (!checkString(str, "dddd-dd-ddTdd:dd:dd")) {
+ return -1;
+ }
+
+#define nextnum() (((*str - '0') * 10) + (*(str + 1) - '0'))
+
+ // Convert it
+ struct tm stm;
+ memset(&stm, 0, sizeof(stm));
+
+ stm.tm_year = (nextnum() - 19) * 100;
+ str += 2;
+ stm.tm_year += nextnum();
+ str += 3;
+
+ stm.tm_mon = nextnum() - 1;
+ str += 3;
+
+ stm.tm_mday = nextnum();
+ str += 3;
+
+ stm.tm_hour = nextnum();
+ str += 3;
+
+ stm.tm_min = nextnum();
+ str += 3;
+
+ stm.tm_sec = nextnum();
+ str += 2;
+
+ stm.tm_isdst = -1;
+
+ int64_t ret = mktime(&stm);
+
+ // Skip the millis
+
+ if (*str == '.') {
+ str++;
+ while (isdigit(*str)) {
+ str++;
+ }
+ }
+
+ if (checkString(str, "-dd:dd") || checkString(str, "+dd:dd")) {
+ int sign = (*str++ == '-') ? -1 : 1;
+ int hours = nextnum();
+ str += 3;
+ int minutes = nextnum();
+ ret += (-sign * (((hours * 60) + minutes) * 60));
+ }
+ // Else it should be Z to be a conformant time string, but we just assume
+ // that it is rather than enforcing that
+
+ return ret;
+}
+
+
+uint64_t parseUnsignedInt(const char *str)
+{
+ // Skip whitespace
+ while (is_blank(*str)) {
+ str++;
+ }
+
+ uint64_t ret = 0;
+
+ while (isdigit(*str)) {
+ ret *= 10;
+ ret += (*str++ - '0');
+ }
+
+ return ret;
+}
+
+
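+// Base64-encodes inLen bytes from in into out, padding with '=' as needed.
+// Returns the number of characters written; the output is not
+// null-terminated, and out must have room for ((inLen + 2) / 3) * 4 bytes.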
+int base64Encode(const unsigned char *in, int inLen, char *out)
+{
+ static const char *ENC =
+ "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+
+ char *original_out = out;
+
+ while (inLen) {
+ // first 6 bits of char 1
+ *out++ = ENC[*in >> 2];
+ if (!--inLen) {
+ // last 2 bits of char 1, 4 bits of 0
+ *out++ = ENC[(*in & 0x3) << 4];
+ *out++ = '=';
+ *out++ = '=';
+ break;
+ }
+ // last 2 bits of char 1, first 4 bits of char 2
+ *out++ = ENC[((*in & 0x3) << 4) | (*(in + 1) >> 4)];
+ in++;
+ if (!--inLen) {
+ // last 4 bits of char 2, 2 bits of 0
+ *out++ = ENC[(*in & 0xF) << 2];
+ *out++ = '=';
+ break;
+ }
+ // last 4 bits of char 2, first 2 bits of char 3
+ *out++ = ENC[((*in & 0xF) << 2) | (*(in + 1) >> 6)];
+ in++;
+ // last 6 bits of char 3
+ *out++ = ENC[*in & 0x3F];
+ in++, inLen--;
+ }
+
+ return (out - original_out);
+}
+
+
+#define rol(value, bits) (((value) << (bits)) | ((value) >> (32 - (bits))))
+
+#define blk0L(i) (block->l[i] = (rol(block->l[i], 24) & 0xFF00FF00) \
+ | (rol(block->l[i], 8) & 0x00FF00FF))
+
+#define blk0B(i) (block->l[i])
+
+#define blk(i) (block->l[i & 15] = rol(block->l[(i + 13) & 15] ^ \
+ block->l[(i + 8) & 15] ^ \
+ block->l[(i + 2) & 15] ^ \
+ block->l[i & 15], 1))
+
+#define R0_L(v, w, x, y, z, i) \
+ z += ((w & (x ^ y)) ^ y) + blk0L(i) + 0x5A827999 + rol(v, 5); \
+ w = rol(w, 30);
+#define R0_B(v, w, x, y, z, i) \
+ z += ((w & (x ^ y)) ^ y) + blk0B(i) + 0x5A827999 + rol(v, 5); \
+ w = rol(w, 30);
+#define R1(v, w, x, y, z, i) \
+ z += ((w & (x ^ y)) ^ y) + blk(i) + 0x5A827999 + rol(v, 5); \
+ w = rol(w, 30);
+#define R2(v, w, x, y, z, i) \
+ z += (w ^ x ^ y) + blk(i) + 0x6ED9EBA1 + rol(v, 5); \
+ w = rol(w, 30);
+#define R3(v, w, x, y, z, i) \
+ z += (((w | x) & y) | (w & x)) + blk(i) + 0x8F1BBCDC + rol(v, 5); \
+ w = rol(w, 30);
+#define R4(v, w, x, y, z, i) \
+ z += (w ^ x ^ y) + blk(i) + 0xCA62C1D6 + rol(v, 5); \
+ w = rol(w, 30);
+
+#define R0A_L(i) R0_L(a, b, c, d, e, i)
+#define R0B_L(i) R0_L(b, c, d, e, a, i)
+#define R0C_L(i) R0_L(c, d, e, a, b, i)
+#define R0D_L(i) R0_L(d, e, a, b, c, i)
+#define R0E_L(i) R0_L(e, a, b, c, d, i)
+
+#define R0A_B(i) R0_B(a, b, c, d, e, i)
+#define R0B_B(i) R0_B(b, c, d, e, a, i)
+#define R0C_B(i) R0_B(c, d, e, a, b, i)
+#define R0D_B(i) R0_B(d, e, a, b, c, i)
+#define R0E_B(i) R0_B(e, a, b, c, d, i)
+
+#define R1A(i) R1(a, b, c, d, e, i)
+#define R1B(i) R1(b, c, d, e, a, i)
+#define R1C(i) R1(c, d, e, a, b, i)
+#define R1D(i) R1(d, e, a, b, c, i)
+#define R1E(i) R1(e, a, b, c, d, i)
+
+#define R2A(i) R2(a, b, c, d, e, i)
+#define R2B(i) R2(b, c, d, e, a, i)
+#define R2C(i) R2(c, d, e, a, b, i)
+#define R2D(i) R2(d, e, a, b, c, i)
+#define R2E(i) R2(e, a, b, c, d, i)
+
+#define R3A(i) R3(a, b, c, d, e, i)
+#define R3B(i) R3(b, c, d, e, a, i)
+#define R3C(i) R3(c, d, e, a, b, i)
+#define R3D(i) R3(d, e, a, b, c, i)
+#define R3E(i) R3(e, a, b, c, d, i)
+
+#define R4A(i) R4(a, b, c, d, e, i)
+#define R4B(i) R4(b, c, d, e, a, i)
+#define R4C(i) R4(c, d, e, a, b, i)
+#define R4D(i) R4(d, e, a, b, c, i)
+#define R4E(i) R4(e, a, b, c, d, i)
+
+
+static void SHA1_transform(uint32_t state[5], const unsigned char buffer[64])
+{
+ uint32_t a, b, c, d, e;
+
+ typedef union {
+ unsigned char c[64];
+ uint32_t l[16];
+ } u;
+
+ unsigned char w[64];
+ u *block = (u *) w;
+
+ memcpy(block, buffer, 64);
+
+ a = state[0];
+ b = state[1];
+ c = state[2];
+ d = state[3];
+ e = state[4];
+
+ static uint32_t endianness_indicator = 0x1;
+ if (((unsigned char *) &endianness_indicator)[0]) {
+ R0A_L( 0);
+ R0E_L( 1); R0D_L( 2); R0C_L( 3); R0B_L( 4); R0A_L( 5);
+ R0E_L( 6); R0D_L( 7); R0C_L( 8); R0B_L( 9); R0A_L(10);
+ R0E_L(11); R0D_L(12); R0C_L(13); R0B_L(14); R0A_L(15);
+ }
+ else {
+ R0A_B( 0);
+ R0E_B( 1); R0D_B( 2); R0C_B( 3); R0B_B( 4); R0A_B( 5);
+ R0E_B( 6); R0D_B( 7); R0C_B( 8); R0B_B( 9); R0A_B(10);
+ R0E_B(11); R0D_B(12); R0C_B(13); R0B_B(14); R0A_B(15);
+ }
+ R1E(16); R1D(17); R1C(18); R1B(19); R2A(20);
+ R2E(21); R2D(22); R2C(23); R2B(24); R2A(25);
+ R2E(26); R2D(27); R2C(28); R2B(29); R2A(30);
+ R2E(31); R2D(32); R2C(33); R2B(34); R2A(35);
+ R2E(36); R2D(37); R2C(38); R2B(39); R3A(40);
+ R3E(41); R3D(42); R3C(43); R3B(44); R3A(45);
+ R3E(46); R3D(47); R3C(48); R3B(49); R3A(50);
+ R3E(51); R3D(52); R3C(53); R3B(54); R3A(55);
+ R3E(56); R3D(57); R3C(58); R3B(59); R4A(60);
+ R4E(61); R4D(62); R4C(63); R4B(64); R4A(65);
+ R4E(66); R4D(67); R4C(68); R4B(69); R4A(70);
+ R4E(71); R4D(72); R4C(73); R4B(74); R4A(75);
+ R4E(76); R4D(77); R4C(78); R4B(79);
+
+ state[0] += a;
+ state[1] += b;
+ state[2] += c;
+ state[3] += d;
+ state[4] += e;
+}
+
+
+typedef struct
+{
+ uint32_t state[5];
+ uint32_t count[2];
+ unsigned char buffer[64];
+} SHA1Context;
+
+
+static void SHA1_init(SHA1Context *context)
+{
+ context->state[0] = 0x67452301;
+ context->state[1] = 0xEFCDAB89;
+ context->state[2] = 0x98BADCFE;
+ context->state[3] = 0x10325476;
+ context->state[4] = 0xC3D2E1F0;
+ context->count[0] = context->count[1] = 0;
+}
+
+
+static void SHA1_update(SHA1Context *context, const unsigned char *data,
+ unsigned int len)
+{
+ uint32_t i, j;
+
+ j = (context->count[0] >> 3) & 63;
+
+ if ((context->count[0] += len << 3) < (len << 3)) {
+ context->count[1]++;
+ }
+
+ context->count[1] += (len >> 29);
+
+ if ((j + len) > 63) {
+ memcpy(&(context->buffer[j]), data, (i = 64 - j));
+ SHA1_transform(context->state, context->buffer);
+ for ( ; (i + 63) < len; i += 64) {
+ SHA1_transform(context->state, &(data[i]));
+ }
+ j = 0;
+ }
+ else {
+ i = 0;
+ }
+
+ memcpy(&(context->buffer[j]), &(data[i]), len - i);
+}
+
+
+static void SHA1_final(unsigned char digest[20], SHA1Context *context)
+{
+ uint32_t i;
+ unsigned char finalcount[8];
+
+ for (i = 0; i < 8; i++) {
+ finalcount[i] = (unsigned char)
+ ((context->count[(i >= 4 ? 0 : 1)] >>
+ ((3 - (i & 3)) * 8)) & 255);
+ }
+
+ SHA1_update(context, (unsigned char *) "\200", 1);
+
+ while ((context->count[0] & 504) != 448) {
+ SHA1_update(context, (unsigned char *) "\0", 1);
+ }
+
+ SHA1_update(context, finalcount, 8);
+
+ for (i = 0; i < 20; i++) {
+ digest[i] = (unsigned char)
+ ((context->state[i >> 2] >> ((3 - (i & 3)) * 8)) & 255);
+ }
+
+ memset(context->buffer, 0, 64);
+ memset(context->state, 0, 20);
+ memset(context->count, 0, 8);
+ memset(&finalcount, 0, 8);
+
+ SHA1_transform(context->state, context->buffer);
+}
+
+
+// HMAC-SHA-1:
+//
+// K - is key padded with zeros to 512 bits
+// m - is message
+// OPAD - 0x5c5c5c...
+// IPAD - 0x363636...
+//
+// HMAC(K,m) = SHA1((K ^ OPAD) . SHA1((K ^ IPAD) . m))
+void HMAC_SHA1(unsigned char hmac[20], const unsigned char *key, int key_len,
+ const unsigned char *message, int message_len)
+{
+ unsigned char kopad[64], kipad[64];
+ int i;
+
+ if (key_len > 64) {
+ key_len = 64;
+ }
+
+ for (i = 0; i < key_len; i++) {
+ kopad[i] = key[i] ^ 0x5c;
+ kipad[i] = key[i] ^ 0x36;
+ }
+
+ for ( ; i < 64; i++) {
+ kopad[i] = 0 ^ 0x5c;
+ kipad[i] = 0 ^ 0x36;
+ }
+
+ unsigned char digest[20];
+
+ SHA1Context context;
+
+ SHA1_init(&context);
+ SHA1_update(&context, kipad, 64);
+ SHA1_update(&context, message, message_len);
+ SHA1_final(digest, &context);
+
+ SHA1_init(&context);
+ SHA1_update(&context, kopad, 64);
+ SHA1_update(&context, digest, 20);
+ SHA1_final(hmac, &context);
+}
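+
+// Usage sketch (illustrative; stringToSign and secretAccessKey are
+// hypothetical locals): an S3-style request signature is the base64 encoding
+// of the 20-byte HMAC-SHA-1 digest of the string to sign, keyed by the
+// secret access key.
+//
+//     unsigned char hmac[20];
+//     char b64[((20 + 2) / 3) * 4 + 1];
+//     HMAC_SHA1(hmac, (const unsigned char *) secretAccessKey,
+//               strlen(secretAccessKey),
+//               (const unsigned char *) stringToSign, strlen(stringToSign));
+//     int b64Len = base64Encode(hmac, 20, b64);
+//     b64[b64Len] = 0;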
+
+#define rot(x,k) (((x) << (k)) | ((x) >> (32 - (k))))
+
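+// 64-bit hash of an arbitrary byte string: three 32-bit lanes are mixed with
+// Jenkins-style rotate/add rounds, assembling input bytes into words in the
+// machine's native byte order, and the result packs lanes c and b into the
+// upper and lower 32 bits respectively.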
+uint64_t hash(const unsigned char *k, int length)
+{
+ uint32_t a, b, c;
+
+ a = b = c = 0xdeadbeef + ((uint32_t) length);
+
+ static uint32_t endianness_indicator = 0x1;
+ if (((unsigned char *) &endianness_indicator)[0]) {
+ while (length > 12) {
+ a += k[0];
+ a += ((uint32_t) k[1]) << 8;
+ a += ((uint32_t) k[2]) << 16;
+ a += ((uint32_t) k[3]) << 24;
+ b += k[4];
+ b += ((uint32_t) k[5]) << 8;
+ b += ((uint32_t) k[6]) << 16;
+ b += ((uint32_t) k[7]) << 24;
+ c += k[8];
+ c += ((uint32_t) k[9]) << 8;
+ c += ((uint32_t) k[10]) << 16;
+ c += ((uint32_t) k[11]) << 24;
+ a -= c; a ^= rot(c, 4); c += b;
+ b -= a; b ^= rot(a, 6); a += c;
+ c -= b; c ^= rot(b, 8); b += a;
+ a -= c; a ^= rot(c, 16); c += b;
+ b -= a; b ^= rot(a, 19); a += c;
+ c -= b; c ^= rot(b, 4); b += a;
+ length -= 12;
+ k += 12;
+ }
+
+ switch(length) {
+ case 12: c += ((uint32_t) k[11]) << 24;
+ case 11: c += ((uint32_t) k[10]) << 16;
+ case 10: c += ((uint32_t) k[9]) << 8;
+ case 9 : c += k[8];
+ case 8 : b += ((uint32_t) k[7]) << 24;
+ case 7 : b += ((uint32_t) k[6]) << 16;
+ case 6 : b += ((uint32_t) k[5]) << 8;
+ case 5 : b += k[4];
+ case 4 : a += ((uint32_t) k[3]) << 24;
+ case 3 : a += ((uint32_t) k[2]) << 16;
+ case 2 : a += ((uint32_t) k[1]) << 8;
+ case 1 : a += k[0]; break;
+ case 0 : goto end;
+ }
+ }
+ else {
+ while (length > 12) {
+ a += ((uint32_t) k[0]) << 24;
+ a += ((uint32_t) k[1]) << 16;
+ a += ((uint32_t) k[2]) << 8;
+ a += ((uint32_t) k[3]);
+ b += ((uint32_t) k[4]) << 24;
+ b += ((uint32_t) k[5]) << 16;
+ b += ((uint32_t) k[6]) << 8;
+ b += ((uint32_t) k[7]);
+ c += ((uint32_t) k[8]) << 24;
+ c += ((uint32_t) k[9]) << 16;
+ c += ((uint32_t) k[10]) << 8;
+ c += ((uint32_t) k[11]);
+ a -= c; a ^= rot(c, 4); c += b;
+ b -= a; b ^= rot(a, 6); a += c;
+ c -= b; c ^= rot(b, 8); b += a;
+ a -= c; a ^= rot(c, 16); c += b;
+ b -= a; b ^= rot(a, 19); a += c;
+ c -= b; c ^= rot(b, 4); b += a;
+ length -= 12;
+ k += 12;
+ }
+
+ switch(length) {
+ case 12: c += k[11];
+ case 11: c += ((uint32_t) k[10]) << 8;
+ case 10: c += ((uint32_t) k[9]) << 16;
+ case 9 : c += ((uint32_t) k[8]) << 24;
+ case 8 : b += k[7];
+ case 7 : b += ((uint32_t) k[6]) << 8;
+ case 6 : b += ((uint32_t) k[5]) << 16;
+ case 5 : b += ((uint32_t) k[4]) << 24;
+ case 4 : a += k[3];
+ case 3 : a += ((uint32_t) k[2]) << 8;
+ case 2 : a += ((uint32_t) k[1]) << 16;
+ case 1 : a += ((uint32_t) k[0]) << 24; break;
+ case 0 : goto end;
+ }
+ }
+
+ c ^= b; c -= rot(b, 14);
+ a ^= c; a -= rot(c, 11);
+ b ^= a; b -= rot(a, 25);
+ c ^= b; c -= rot(b, 16);
+ a ^= c; a -= rot(c, 4);
+ b ^= a; b -= rot(a, 14);
+ c ^= b; c -= rot(b, 24);
+
+ end:
+ return ((((uint64_t) c) << 32) | b);
+}
+
+int is_blank(char c)
+{
+ return ((c == ' ') || (c == '\t'));
+}
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- each elementxx is 9 characters long, + slash gives 10 characters -->
+<element00>
+<element01>
+<element02>
+<element03>
+<element04>
+<element05>
+<element06>
+<element07>
+<element08>
+<element09>
+<element10>
+<element11>
+<element12>
+<element13>
+<element14>
+<element15>
+<element16>
+<element17>
+<element18>
+<element19>
+<element20>
+<element21>
+<element22>
+<element23>
+<element24>
+<element25>
+<element26>
+<element27>
+<element28>
+<element29>
+<element30>
+<element31>
+<element32>
+<element33>
+<element34>
+<element35>
+<element36>
+<element37>
+<element38>
+<element39>
+<element40>
+<element41>
+<element42>
+<element43>
+<element44>
+<element45>
+<element46>
+<element47>
+<element48>
+<element49>
+<element50xxx>
+Data
+</element50xxx>
+</element49>
+</element48>
+</element47>
+</element46>
+</element45>
+</element44>
+</element43>
+</element42>
+</element41>
+</element40>
+</element39>
+</element38>
+</element37>
+</element36>
+</element35>
+</element34>
+</element33>
+</element32>
+</element31>
+</element30>
+</element29>
+</element28>
+</element27>
+</element26>
+</element25>
+</element24>
+</element23>
+</element22>
+</element21>
+</element20>
+</element19>
+</element18>
+</element17>
+</element16>
+</element15>
+</element14>
+</element13>
+</element12>
+</element11>
+</element10>
+</element09>
+</element08>
+</element07>
+</element06>
+</element05>
+</element04>
+</element03>
+</element02>
+</element01>
+</element00>
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<Error>
+ <Code>NoSuchKey</Code>
+ <Message> The resource <![CDATA[<now> & then]]> you requested does not exist & so there </Message>
+ <Resource>/mybucket/myfoto.jpg</Resource>
+ <RequestId>4442587FB7D0A2F9</RequestId>
+</Error>
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- each elementxx is 9 characters long, + slash gives 10 characters -->
+<element00>
+<element01>
+<element02>
+<element03>
+<element04>
+<element05>
+<element06>
+<element07>
+<element08>
+<element09>
+<element10>
+<element11>
+<element12>
+<element13>
+<element14>
+<element15>
+<element16>
+<element17>
+<element18>
+<element19>
+<element20>
+<element21>
+<element22>
+<element23>
+<element24>
+<element25>
+<element26>
+<element27>
+<element28>
+<element29>
+<element30>
+<element31>
+<element32>
+<element33>
+<element34>
+<element35>
+<element36>
+<element37>
+<element38>
+<element39>
+<element40>
+<element41>
+<element42>
+<element43>
+<element44>
+<element45>
+<element46>
+<element47>
+<element48>
+<element49>
+<element50xx>
+Data
+</element50xx>
+</element49>
+</element48>
+</element47>
+</element46>
+</element45>
+</element44>
+</element43>
+</element42>
+</element41>
+</element40>
+</element39>
+</element38>
+</element37>
+</element36>
+</element35>
+</element34>
+</element33>
+</element32>
+</element31>
+</element30>
+</element29>
+</element28>
+</element27>
+</element26>
+</element25>
+</element24>
+</element23>
+</element22>
+</element21>
+</element20>
+</element19>
+</element18>
+</element17>
+</element16>
+</element15>
+</element14>
+</element13>
+</element12>
+</element11>
+</element10>
+</element09>
+</element08>
+</element07>
+</element06>
+</element05>
+</element04>
+</element03>
+</element02>
+</element01>
+</element00>
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<longdata>
+12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
56789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
+12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
+12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
+12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
+12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
+12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
+12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
+12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
+12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
+</longdata>
+
--- /dev/null
+#!/bin/sh
+
+# Environment:
+# S3_ACCESS_KEY_ID - must be set to S3 Access Key ID
+# S3_SECRET_ACCESS_KEY - must be set to S3 Secret Access Key
+# TEST_BUCKET_PREFIX - must be set to the test bucket prefix to use
+# S3_COMMAND - may be set to the s3 command to use (e.g. "valgrind s3");
+#              defaults to "s3"
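+#
+# Example invocation (hypothetical values; substitute your own credentials
+# and a bucket prefix that you control, and adjust the script name if it is
+# saved under a different name):
+#
+#   S3_ACCESS_KEY_ID=AKIAEXAMPLEKEYID \
+#   S3_SECRET_ACCESS_KEY=exampleSecretAccessKey \
+#   TEST_BUCKET_PREFIX=mytests \
+#   S3_COMMAND="valgrind s3" \
+#   sh ./test.sh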
+
+if [ -z "$S3_ACCESS_KEY_ID" ]; then
+ echo "S3_ACCESS_KEY_ID required"
+ exit 1
+fi
+
+if [ -z "$S3_SECRET_ACCESS_KEY" ]; then
+ echo "S3_SECRET_ACCESS_KEY required"
+ exit 1
+fi
+
+if [ -z "$TEST_BUCKET_PREFIX" ]; then
+ echo "TEST_BUCKET_PREFIX required"
+ exit 1
+fi
+
+if [ -z "$S3_COMMAND" ]; then
+ S3_COMMAND=s3
+fi
+
+TEST_BUCKET=${TEST_BUCKET_PREFIX}.testbucket
+
+# Create the test bucket in EU
+echo "$S3_COMMAND create $TEST_BUCKET locationConstraint=EU"
+$S3_COMMAND create $TEST_BUCKET locationConstraint=EU
+
+# List to find it
+echo "$S3_COMMAND list | grep $TEST_BUCKET"
+$S3_COMMAND list | grep $TEST_BUCKET
+
+# Test it
+echo "$S3_COMMAND test $TEST_BUCKET"
+$S3_COMMAND test $TEST_BUCKET
+
+# List to ensure that it is empty
+echo "$S3_COMMAND list $TEST_BUCKET"
+$S3_COMMAND list $TEST_BUCKET
+
+# Put some data
+rm -f seqdata
+seq 1 10000 > seqdata
+echo "$S3_COMMAND put $TEST_BUCKET/testkey filename=seqdata noStatus=1"
+$S3_COMMAND put $TEST_BUCKET/testkey filename=seqdata noStatus=1
+
+rm -f testkey
+# Get the data and make sure that it matches
+echo "$S3_COMMAND get $TEST_BUCKET/testkey filename=testkey"
+$S3_COMMAND get $TEST_BUCKET/testkey filename=testkey
+diff seqdata testkey
+rm -f seqdata testkey
+
+# Delete the file
+echo "$S3_COMMAND delete $TEST_BUCKET/testkey"
+$S3_COMMAND delete $TEST_BUCKET/testkey
+
+# Remove the test bucket
+echo "$S3_COMMAND delete $TEST_BUCKET"
+$S3_COMMAND delete $TEST_BUCKET
+
+# Make sure it's not there
+echo "$S3_COMMAND list | grep $TEST_BUCKET"
+$S3_COMMAND list | grep $TEST_BUCKET
+
+# Now create it again
+echo "$S3_COMMAND create $TEST_BUCKET"
+$S3_COMMAND create $TEST_BUCKET
+
+# Put 10 files in it
+for i in `seq 0 9`; do
+ echo "echo \"Hello\" | $S3_COMMAND put $TEST_BUCKET/key_$i"
+ echo "Hello" | $S3_COMMAND put $TEST_BUCKET/key_$i
+done
+
+# List with all details
+echo "$S3_COMMAND list $TEST_BUCKET allDetails=1"
+$S3_COMMAND list $TEST_BUCKET allDetails=1
+
+COPY_BUCKET=${TEST_BUCKET_PREFIX}.copybucket
+
+# Create another test bucket and copy a file into it
+echo "$S3_COMMAND create $COPY_BUCKET"
+$S3_COMMAND create $COPY_BUCKET
+echo "$S3_COMMAND copy $TEST_BUCKET/key_5 $COPY_BUCKET/copykey"
+$S3_COMMAND copy $TEST_BUCKET/key_5 $COPY_BUCKET/copykey
+
+# List the copy bucket
+echo "$S3_COMMAND list $COPY_BUCKET allDetails=1"
+$S3_COMMAND list $COPY_BUCKET allDetails=1
+
+# Compare the files
+rm -f key_5 copykey
+echo "$S3_COMMAND get $TEST_BUCKET/key_5 filename=key_5"
+$S3_COMMAND get $TEST_BUCKET/key_5 filename=key_5
+echo "$S3_COMMAND get $COPY_BUCKET/copykey filename=copykey"
+$S3_COMMAND get $COPY_BUCKET/copykey filename=copykey
+diff key_5 copykey
+rm -f key_5 copykey
+
+# Delete the files
+for i in `seq 0 9`; do
+ echo "$S3_COMMAND delete $TEST_BUCKET/key_$i"
+ $S3_COMMAND delete $TEST_BUCKET/key_$i
+done
+echo "$S3_COMMAND delete $COPY_BUCKET/copykey"
+$S3_COMMAND delete $COPY_BUCKET/copykey
+
+# Delete the copy bucket
+echo "$S3_COMMAND delete $COPY_BUCKET"
+$S3_COMMAND delete $COPY_BUCKET
+
+# Now create a new zero-length file
+echo "$S3_COMMAND put $TEST_BUCKET/aclkey < /dev/null"
+$S3_COMMAND put $TEST_BUCKET/aclkey < /dev/null
+
+# Get the bucket acl
+rm -f acl
+echo "$S3_COMMAND getacl $TEST_BUCKET filename=acl allDetails=1"
+$S3_COMMAND getacl $TEST_BUCKET filename=acl allDetails=1
+
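+# Note: the grant lines appended below use a one-grant-per-line text format
+# ("<grantee-type> <grantee> <permission>"); this is inferred from the
+# grants used in this script, so consult the s3 command's help output for
+# the authoritative ACL file format.
+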
+# Add READ for all AWS users, and READ_ACP for everyone
+cat <<EOF >> acl
+Group Authenticated AWS Users READ
+Group All Users READ_ACP
+EOF
+echo "$S3_COMMAND setacl $TEST_BUCKET filename=acl"
+$S3_COMMAND setacl $TEST_BUCKET filename=acl
+
+# Test to make sure that it worked
+rm -f acl_new
+echo "$S3_COMMAND getacl $TEST_BUCKET filename=acl_new allDetails=1"
+$S3_COMMAND getacl $TEST_BUCKET filename=acl_new allDetails=1
+diff acl acl_new
+rm -f acl acl_new
+
+# Get the key acl
+rm -f acl
+echo "$S3_COMMAND getacl $TEST_BUCKET/aclkey filename=acl allDetails=1"
+$S3_COMMAND getacl $TEST_BUCKET/aclkey filename=acl allDetails=1
+
+# Add READ for all AWS users, and READ_ACP for everyone
+cat <<EOF >> acl
+Group Authenticated AWS Users READ
+Group All Users READ_ACP
+EOF
+echo "$S3_COMMAND setacl $TEST_BUCKET/aclkey filename=acl"
+$S3_COMMAND setacl $TEST_BUCKET/aclkey filename=acl
+
+# Test to make sure that it worked
+rm -f acl_new
+echo "$S3_COMMAND getacl $TEST_BUCKET/aclkey filename=acl_new allDetails=1"
+$S3_COMMAND getacl $TEST_BUCKET/aclkey filename=acl_new allDetails=1
+diff acl acl_new
+rm -f acl acl_new
+
+# Remove the test file
+echo "$S3_COMMAND delete $TEST_BUCKET/aclkey"
+$S3_COMMAND delete $TEST_BUCKET/aclkey
+
+# Remove the test bucket
+echo "$S3_COMMAND delete $TEST_BUCKET"
+$S3_COMMAND delete $TEST_BUCKET