HTTP (HyperText Transfer Protocol) is the standard protocol for transferring information between a web client (browser) and a web server. The protocol is a simple envelope protocol in which standard name/value pairs in the header are used to split the stream into messages and communicate about the connection status. Many languages have client and/or server libraries for the HTTP protocol, making it a suitable candidate for general-purpose client-server applications.
In this document we describe a modular infrastructure to access web servers from SWI-Prolog and to turn Prolog into a web server.
This work has been carried out under the following projects: GARP, MIA, IBROW, KITS and MultiMediaN. The following people have pioneered parts of this library and contributed with bug reports and suggestions for improvements: Anjo Anjewierden, Bert Bredeweg, Wouter Jansweijer, Bob Wielinga, Jacco van Ossenbruggen, Michiel Hildebrandt, Matt Lilley and Keri Harris.
This package provides two libraries for building HTTP clients. The first, library(http/http_open), is a lightweight library for opening an HTTP URL as a Prolog stream. It can only deal with the HTTP GET method. The second, library(http/http_client), is a more advanced library dealing with keep-alive, chunked transfer and a plug-in mechanism providing conversions based on the MIME content type.
This library provides a lightweight HTTP client to get data from a URL. The functionality of the library can be extended by loading two additional modules that act as plugins: one adds the POST method in addition to GET, HEAD and DELETE; the other adds support for https, which is requested using a default SSL context. See the plugin for additional information regarding security.
Here is a simple example to fetch a web-page:
?- http_open('http://www.google.com/search?q=prolog', In, []),
   copy_stream_data(In, user_output),
   close(In).
<!doctype html><head><title>prolog - Google Search</title><script>
...
The example below fetches the modification time of a web-page. Note that Modified is '' (the empty atom) if the web-server does not provide a time-stamp for the resource. See also parse_time/2.
modified(URL, Stamp) :-
    http_open(URL, In,
              [ method(head),
                header(last_modified, Modified)
              ]),
    close(In),
    Modified \== '',
    parse_time(Modified, Stamp).
basic(User, Password). See also http_set_authorization/2.
header(Name, Value) option.
get (default), head or delete.
The head method can be used in combination with the header(Name, Value) option to access information on the resource without actually fetching the resource itself. The returned stream must be closed immediately. If library(http/http_header) is loaded, http_open/3 also supports post and put. See the post(Data) option.
Content-Length in the reply header.
Major-Minor, where Major and Minor are integers representing the HTTP version in the reply header.
infinite).
library(http/http_header) is also loaded. Data is handed to http_post_data/3.
proxy(+Host:Port). Deprecated.
authorization option.
If true, bypass proxy hooks. Default is false.
User-Agent field of the HTTP header. Default is SWI-Prolog.
The hook http:open_options/2 can be used to provide default options based on the broken-down URL.
URL is either an atom (url) or a list of parts. If this list is provided, it may contain the fields scheme, user, password, host, port, path and search (where the argument of the latter is a list of Name(Value) or Name=Value). Only host is mandatory. The example below opens the URL http://www.example.com/my/path?q=Hello%20World&lang=en. Note that values must not be quoted because the library inserts the required quotes.
http_open([ host('www.example.com'),
            path('/my/path'),
            search([ q='Hello world',
                     lang=en
                   ])
          ], In, [])
If the authorization is the atom -, a possibly defined authorization is cleared. For example:

?- http_set_authorization('http://www.example.com/private/',
                          basic('John', 'Secret')).
http and https URLs for Mode == read.

:- multifile http:open_options/2.

http:open_options(Parts, Options) :-
    option(host(Host), Parts),
    Host \== localhost,
    Options = [proxy('proxy.local', 3128)].
This hook may return multiple solutions. The returned options are combined using merge_options/3 where earlier solutions overrule later solutions.
Cookie: header for the current connection. Out is an open stream to the HTTP server, Parts is the broken-down request (see uri_components/2) and Options is the list of options passed to http_open. The predicate is called as if using ignore/1.
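A hook of this shape could send cookies from a user-maintained store. The sketch below is illustrative: the dynamic predicate known_cookie/2 is a hypothetical per-host cookie store, not part of the library.

```prolog
:- multifile http:write_cookies/3.

% Hypothetical store of cookies per host: known_cookie(Host, NameValueAtom).
:- dynamic known_cookie/2.

http:write_cookies(Out, Parts, _Options) :-
    memberchk(host(Host), Parts),
    findall(NV, known_cookie(Host, NV), NVs),
    NVs \== [],
    % Join the Name=Value atoms with "; " as the Cookie header requires.
    atomic_list_concat(NVs, '; ', Cookie),
    format(Out, 'Cookie: ~w\r\n', [Cookie]).
```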
Set-Cookie field, Parts is the broken-down request (see uri_components/2) and Options is the list of options passed to http_open.
The library(http/http_client) library provides more powerful access to reading HTTP resources, providing keep-alive connections, chunked transfer and conversion of the content, such as breaking down multipart data, parsing HTML, etc. The library announces itself as providing HTTP/1.1.
If close (default), a new connection is created for this request and closed after the request has completed. If 'Keep-Alive', the library checks for an open connection on the requested host and port and re-uses this connection. The connection is left open if the other party confirms the keep-alive and closed otherwise.
1.1.

authorization option.

infinite).

User-Agent field of the HTTP header. Default is SWI-Prolog (http://www.swi-prolog.org).

Unit(From, To), where From is an integer and To is either an integer or the atom end. HTTP 1.1 only supports Unit = bytes. E.g., to ask for bytes 1000-1999, use the option range(bytes(1000,1999)).

Remaining options are passed to http_read_data/3.
Name(Value) pairs to guide the translation of the data. The following options are supported:
Content-Type as provided by the HTTP reply header. Intended as a work-around for badly configured servers. If no to(Target) option is provided, the library tries the registered plug-in conversion filters. If none of these succeed, it tries the built-in content-type handlers or returns the content as an atom. The built-in content filters are described below. The provided plug-ins are described in the following sections. Finally, if all else fails, the content is returned as an atom.
library(html_write) described in section 3.18.

text/xml.

xml(XMLTerm), using the provided MIME type.

Content-type equals Type.

application/x-www-form-urlencoded as produced by browsers issuing a POST request from an HTML form. ListOfParameter is a list of Name=Value or Name(Value).

multipart/form-data as produced by browsers issuing a POST request from an HTML form using enctype multipart/form-data. This is a somewhat simplified MIME multipart/mixed encoding used by browser forms including file input fields. ListOfData is the same as for the List alternative described below. Below is an example from the SWI-Prolog Sesame interface. Repository, etc. are atoms providing the value, while the last argument provides a value from a file.
...,
http_post([ protocol(http),
            host(Host),
            port(Port),
            path(ActionPath)
          ],
          form_data([ repository = Repository,
                      dataFormat = DataFormat,
                      baseURI    = BaseURI,
                      verifyData = Verify,
                      data       = file(File)
                    ]),
          _Reply, []),
...,
multipart/mixed and packed using mime_pack/3. See mime_pack/3 for details on the argument format.
This plug-in library, library(http/http_mime_plugin), breaks multipart documents that are recognised by the Content-Type: multipart/form-data or Mime-Version: 1.0 in the header into a list of Name = Value pairs. This library deals with data from web forms using the multipart/form-data encoding as well as the FIPA agent-protocol messages.
This plug-in library, library(http/http_sgml_plugin), provides a bridge between the SGML/XML/HTML parser provided by library(sgml) and the HTTP client library. After loading this hook the following MIME types are automatically handled by the SGML parser.

library(sgml) using the W3C HTML 4.0 DTD, suppressing and ignoring all HTML syntax errors. Options is passed to load_structure/3.

library(sgml) using dialect xmlns (XML + namespaces). Options is passed to load_structure/3. In particular, dialect(xml) may be used to suppress namespace handling.

library(sgml) using dialect sgml. Options is passed to load_structure/3.
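With the plugin loaded, fetching an HTML page yields a parsed document term rather than a plain atom. A minimal sketch (the URL is a placeholder; the resulting DOM is of course site-dependent):

```prolog
:- use_module(library(http/http_client)).
:- use_module(library(http/http_sgml_plugin)).

% Because the SGML plugin is loaded, DOM is bound to a parsed term of
% the form [element(html, Attributes, Content)] instead of an atom.
fetch_dom(URL, DOM) :-
    http_get(URL, DOM, []).
```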
The HTTP server library consists of two obligatory parts and one optional part. The first deals with connection management and has three different implementations depending on the desired type of server. The second implements a generic wrapper for decoding the HTTP request, calling user code to handle the request and encoding the answer. The optional http_dispatch module can be used to assign HTTP locations (paths) to predicates. This design is summarised in figure 1.
The functional body of the user's code is independent from the selected server-type, making it easy to switch between the supported server types.
The server body is the code that handles the request and formulates a reply. To facilitate all mentioned setups, the body is driven by http_wrapper/5. The goal is called with the parsed request (see section 3.11) as argument and current_output set to a temporary buffer. Its task is closely related to that of a CGI script; it must write a header holding at least the Content-type field and a body. Here is a simple body writing the request as an HTML table.
reply(Request) :-
    format('Content-type: text/html~n~n', []),
    format('<html>~n', []),
    format('<table border=1>~n'),
    print_request(Request),
    format('~n</table>~n'),
    format('</html>~n', []).

print_request([]).
print_request([H|T]) :-
    H =.. [Name, Value],
    format('<tr><td>~w<td>~w~n', [Name, Value]),
    print_request(T).
The infrastructure recognises the header fields described below. Other header lines are passed verbatim to the client. Typical examples are Set-Cookie and authentication headers (see section 3.7).
text/* or the type matches with UTF-8 (case insensitive), the server uses UTF-8 encoding. The user may force UTF-8 encoding for arbitrary content types by adding ; charset=UTF-8 to the end of the Content-type header.

chunked option in http_handler/3.

Status header to force a redirect response to the given URL. The message body must be empty. Handling this header is primarily intended for compatibility with the CGI conventions. Prolog code should use http_redirect/3.

Location, where Status must be one of 301 (moved), 302 (moved temporary, default) or 303 (see other).
Besides returning a page by writing it to the current output stream, the server goal can raise an exception using throw/1 to generate special pages such as not_found, moved, etc. The defined exceptions are:

http_reply(Reply, []).

http_reply(not_modified, []). This exception is for backward compatibility and can be used by the server to indicate the referenced resource has not been modified since it was requested last time.
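For example, a handler can signal a missing resource by throwing the corresponding http_reply term:

```prolog
% Reply with "404 Not Found" for the requested path.
reply(Request) :-
    memberchk(path(Path), Request),
    throw(http_reply(not_found(Path))).
```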
In addition, the normal "200 OK"
reply status may be
overruled by writing a CGI Status
header prior to the
remainder of the message. This is particularly useful for defining REST
APIs. The following handler replies with a "201 Created"
header:
handle_request(Request) :-
    process_data(Request, Id),               % application predicate
    format('Status: 201~n'),
    format('Content-type: text/plain~n~n'),
    format('Created object as ~q~n', [Id]).
This module can be placed between http_wrapper.pl and the application code to associate HTTP locations with predicates that serve the pages. In addition, it associates parameters with locations that deal with timeout handling and user authentication. The typical setup is:
server(Port, Options) :-
    http_server(http_dispatch,
                [ port(Port)
                | Options
                ]).

:- http_handler('/index.html', write_index, []).

write_index(Request) :-
    ...
http_path.pl. If an HTTP request arrives at the server that matches Path, Closure is called with one extra argument: the parsed HTTP request. Options is a list containing the following options:
http_authenticate.pl provides a plugin for user/password based basic HTTP authentication.
Transfer-encoding: chunked if the client allows for it.

If true on a prefix handler (see prefix), possible children are masked. This can be used to (temporarily) overrule part of the tree.

:- http_handler(/, http_404([index('index.html')]),
                [spawn(my_pool), prefix]).

infinite, default or a positive number (seconds). If default, the value from the setting http:time_limit is taken. The default of this setting is 300 (5 minutes). See setting/2.
Note that http_handler/3 is normally invoked as a directive and processed using term-expansion. Using term-expansion ensures proper update through make/0 when the specification is modified. We do not expand when the cross-referencer is running to ensure proper handling of the meta-call.
PredName. The location (root(user_details)) is irrelevant in this equation and HTTP locations can thus be moved freely without breaking this code fragment.
:- http_handler(root(user_details), user_details, []).

user_details(Request) :-
    http_parameters(Request,
                    [ user_id(ID)
                    ]),
    ...

user_link(ID) -->
    { user_name(ID, Name),
      http_link_to_id(user_details, [id(ID)], HREF)
    },
    html(a([class(user), href(HREF)], Name)).
Parameters is one of:
true (default), handle If-modified-since and send modification time.

false) and, in addition to the plain file, there is a .gz file that is not older than the plain file and the client accepts gzip encoding, send the compressed file with Transfer-encoding: gzip.
If false (default), validate that FileSpec does not contain references to parent directories. E.g., specifications such as www('../../etc/passwd') are not allowed. If caching is not disabled, it processes the request headers If-modified-since and Range.
alias(Sub), then Sub cannot have references to parent directories.
:- http_handler(root(.), http_redirect(moved, myapp('index.html')), []).
How is one of moved, moved_temporary or see_other. To is an atom, an aliased path as defined by http_absolute_location/3, or a term location_by_id(Id). If To is not absolute, it is resolved relative to the current location.
HTTP "101 Switching Protocols" reply. After sending the reply, the HTTP library calls call(Goal, InStream, OutStream), where InStream and OutStream are the raw streams to the HTTP client. This allows the communication to continue using an alternative protocol.

If Goal fails or throws an exception, the streams are closed by the server. Otherwise Goal is responsible for closing the streams. Note that Goal runs in the HTTP handler thread. Typically, the handler should be registered using the spawn option of http_handler/3, or Goal must call thread_create/3 to allow the HTTP worker to return to the worker pool.

The streams use binary (octet) encoding and have their I/O timeout set to the server timeout (default 60 seconds). The predicate set_stream/2 can be used to change the encoding, and to change or cancel the timeout.
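A minimal Goal satisfying this contract might simply echo raw bytes back to the client. The predicate name echo/2 is an assumption; a real protocol would parse its own frames:

```prolog
% Echo bytes from the client back until end-of-file, then close both
% raw streams, as required once the protocol switch has happened.
echo(In, Out) :-
    copy_stream_data(In, Out),
    close(In),
    close(Out).
```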
This predicate interacts with the server library by throwing an exception.
Options is reserved for future extensions. It must be initialised to the empty list ([]).
This module provides a simple API to generate an index for a physical directory. The index can be customised by overruling the dirindex.css CSS file and by defining additional rules for icons using the hook http:file_extension_icon/2. The calling convention allows for direct calling from http_handler/3.
name (default), size or time.

ascending. The alternative is descending.

absolute_file_name(icons(IconName), Path, []).
Although the SWI-Prolog web server is intended to serve documents that need to be computed dynamically, serving plain files is sometimes necessary. This small module combines the functionality of http_reply_file/3 and http_reply_dirindex/3 to act as a simple web server. Such a server can be created using the following code sample, which starts a server at port 8080 that serves files from the current directory ('.'). Note that the handler needs a prefix option to specify that it must handle all paths that begin with the registered location of the handler.
:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).

:- http_handler(root(.), http_reply_from_files('.', []), [prefix]).

:- initialization
   http_server(http_dispatch, [port(8080)]).
indexes to locate an index file (see below) or uses http_reply_dirindex/3 to create a listing of the directory. Options:

Note that this handler must be tagged as a prefix handler (see http_handler/3 and the module introduction). This also implies that it is possible to override more specific locations in the hierarchy using http_handler/3 with a longer path specifier.
Dir is either a directory or a path specification as used by absolute_file_name/3. This option provides great flexibility in (re-)locating the physical files and allows merging the files of multiple physical locations into one web hierarchy by using multiple user:file_search_path/2 clauses that define the same alias.
This library defines session management based on HTTP cookies. Session management is enabled simply by loading this module. Details can be modified using http_set_session_options/1. By default, this module creates a session whenever a request is processed that is inside the hierarchy defined for session handling (see the path option in http_set_session_options/1). Automatic creation of a session can be stopped using the option create(noauto).
The predicate http_open_session/2 must be used to create a session if noauto is enabled. Sessions can be closed using http_close_session/1.
If a session is active, http_in_session/1 returns the current session and http_session_assert/1 and friends maintain data about the session. If the session is reclaimed, all associated data is reclaimed too.
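For example, a handler can associate data with the current session as follows. The predicate names record_visit/1 and visited/1 are illustrative, not part of the library:

```prolog
:- use_module(library(http/http_session)).

% Record each path the current session has visited.
record_visit(Request) :-
    memberchk(path(Path), Request),
    http_session_assert(visited(Path)).

% Retrieve all recorded paths for this session.
visited_paths(Paths) :-
    findall(P, http_session_data(visited(P)), Paths).
```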
Begin and end of sessions can be monitored using library(broadcast)
.
The broadcasted messages are:
For example, the following calls end_session(SessionId) whenever a session terminates. Please note that session ends are not scheduled to happen at the actual timeout moment of the session. Instead, creating a new session scans the active list for timed-out sessions. This may change in future versions of this library.
:- listen(http_session(end(SessionId, Peer)),
          end_session(SessionId)).
swipl_session.

/. Cookies are only sent if the HTTP request path is a refinement of Path.

auto (default), which creates a session if there is a request whose path matches the defined session path, or noauto, in which case sessions are only created by calling http_open_session/2 explicitly.

timeout.
SessionId is an atom.
session(ID) from the current HTTP request (see http_current_request/1). The value is cached in a backtrackable global variable http_session_id. Using a backtrackable global variable is safe because continuous worker threads use a failure-driven loop and spawned threads start without any global variables. This variable can be set from the command line to fake running a goal in the context of a session.
noauto. Options:

If true (default false) and the current request is part of a session, generate a new session id. By default, this predicate returns the current session as obtained with http_in_session/1.
http_session(end(SessionId, Peer))

The broadcast is done before the session data is destroyed and the listen handlers are executed in the context of the session that is being closed. Here is an example that destroys a Prolog thread that is associated with the session:
:- listen(http_session(end(SessionId, _Peer)),
          kill_session_thread(SessionId)).

kill_session_thread(_SessionId) :-
    http_session_data(thread(ThreadID)),
    thread_signal(ThreadID, throw(session_closed)).
Succeed without any effect if SessionID does not refer to an active session.
If http_close_session/1 is called from a handler operating in the current session and the CGI stream is still in state header, this predicate emits a Set-Cookie to expire the cookie.
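A logout handler can therefore simply close the active session. The handler name and reply text below are illustrative:

```prolog
:- use_module(library(http/http_session)).

% End the current session; this also expires the cookie if the reply
% headers have not been sent yet.
logout(_Request) :-
    http_in_session(SessionId),
    http_close_session(SessionId),
    format('Content-type: text/plain~n~n'),
    format('Bye~n').
```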
This small module allows for enabling Cross-Origin Resource Sharing (CORS) for a specific request. Typically, CORS is enabled for API services that you want to have usable from browser client code that is loaded from another domain. An example is the LOD and SPARQL services in ClioPatria.
Because CORS is a security risk (see references), it is disabled by default. It is enabled through the setting http:cors. The value of this setting is a list of domains that are allowed to access the service. Because * is used as a wildcard match, the value [*] allows access from anywhere.
Services for which CORS is relevant must call cors_enable/0 as part of the HTTP response, as shown below. Note that cors_enable/0 is a no-op if the setting http:cors is set to the empty list ([]).
my_handler(Request) :-
    ....,
    cors_enable,
    reply_json(Response, []).
If a site uses a Preflight OPTIONS
request to
find the server's capabilities and access politics, cors_enable/2
can be used to formulate an appropriate reply. For example:
my_handler(Request) :-
    option(method(options), Request), !,
    cors_enable(Request,
                [ methods([get,post,delete])
                ]),
    format('~n').                    % 200 with empty body
Access-Control-Allow-Origin using domains from the setting http:cors. If this setting is [] (default), nothing is written. This predicate is typically used for replying to API HTTP requests (e.g., replies to an AJAX request that typically serve JSON or XML).

OPTIONS request. Request is the HTTP request. Options provides:

GET, only allowing for read requests.

Both methods and headers may use Prolog-friendly syntax, e.g., get for a method and content_type for a header.
This module provides the basics to validate an HTTP Authorization
header. User and password information are read from a Unix/Apache
compatible password file.
This library provides, in addition to the HTTP authentication, predicates to read and write password files.
Basic authentication and verify the password from PasswordFile. PasswordFile is a file holding usernames and passwords in a format compatible with Unix and Apache. Each line is a record with colon (:) separated fields. The first field is the username and the second the password hash. Password hashes are validated using crypt/2.
Successful authorization is cached for 60 seconds to avoid overhead of decoding and lookup of the user and password data.
http_authenticate/3 just validates the header. If authorization is not provided the browser must be challenged, in response to which it normally opens a user-password dialogue. Example code realising this is below. The exception causes the HTTP wrapper code to generate an HTTP 401 reply.
(   http_authenticate(basic(passwd), Request, Fields)
->  true
;   throw(http_reply(authorise(basic, Realm)))
).
Fields is a list of fields from the password-file entry. The first element is the user. The hash is skipped.
Authorization header. Data is a term Method(User, Password), where Method is the (downcased) authorization method (typically basic), User is an atom holding the user name and Password is a list of codes holding the password.
passwd(User, Hash, Fields)
It is possible to create arbitrary error pages for responses generated when an http_reply term is thrown. Currently this is only supported for status 401 (authorization required). To do this, instead of throwing http_reply(authorise(Term)), throw http_reply(authorise(Term), [], Key), where Key is an arbitrary term relating to the page you want to generate. You must then also define a clause of the multifile predicate http:status_page_hook/3:
phrase(page([ title('401 Authorization Required')
            ],
            [ h1('Authorization Required'),
              p(['This server could not verify that you ',
                 'are authorized to access the document ',
                 'requested. Either you supplied the wrong ',
                 'credentials (e.g., bad password), or your ',
                 'browser doesn\'t understand how to supply ',
                 'the credentials required.'
                ]),
              \address
            ]),
       CustomHTML).
This library implements the OpenID protocol (http://openid.net/). OpenID is a protocol to share identities on the network. The protocol itself uses simple basic HTTP, adding reliability using digitally signed messages.
Steps, as seen from the consumer (or relying party).
openid_identifier

openid_identifier and lookup <link rel="openid.server" href="server">

checkid_setup, asking to validate the given OpenID.
A consumer (an application that allows OpenID login) typically uses this library through openid_user/3. In addition, it must implement the hook http_openid:openid_hook(trusted(OpenId, Server)) to define accepted OpenID servers. Typically, this hook is used to provide a white-list of acceptable servers. Note that accepting any OpenID server is possible, but anyone on the internet can set up a dummy OpenID server that simply grants and signs every request. Here is an example:
:- multifile http_openid:openid_hook/1.

http_openid:openid_hook(trusted(_, OpenIdServer)) :-
    (   trusted_server(OpenIdServer)
    ->  true
    ;   throw(http_reply(moved_temporary('/openid/trustedservers')))
    ).

trusted_server('http://www.myopenid.com/server').
By default, information about who is logged on is maintained with the session using http_session_assert/1 with the term openid(Identity). The hooks login/logout/logged_in can be used to provide alternative administration of logged-in users (e.g., based on client IP, using cookies, etc.).
To create a server, you must do four things: bind the handlers openid_server/2 and openid_grant/1 to HTTP locations, provide a user page for registered users, and define the grant(Request, Options) hook to verify your users. An example server is provided in <plbase>/doc/packages/examples/demo_openid.pl
handler(Request) :-
    openid_user(Request, OpenID, []),
    ...
If the user is not yet logged on, a sequence of redirects will follow: verify, which calls openid_verify/2. Options:
img structures where the href points to an OpenID 2.0 endpoint. These buttons are displayed below the OpenID URL field. Clicking a button sets the URL field and submits the form. Requires JavaScript support. If the href is relative, clicking it opens the given location after adding 'openid.return_to' and `stay'.

If true, show a checkbox that allows the user to stay logged on.
http_dispatch.pl. Options processed:

openid.trust_root attribute. Defaults to the root of the current server (i.e., http://host[.port]/).

openid.realm attribute. Default is the trust_root.

The OpenID server will redirect to the openid.return_to URL.
OpenIDLogin is the ID as typed by the user (canonised). OpenID is the ID as verified by the server. Server is the URL of the OpenID server.
After openid_verify/2 has redirected the browser to the OpenID server, and the OpenID server did its magic, it redirects the browser back to this address. The work is fairly trivial. If mode is cancel, the OpenID server denied. If id_res, the OpenID server replied positively, but we must verify what the server told us by checking the HMAC-SHA signature. This call fails silently if there is no openid.mode field in the request.
yes, check the authority (typically the password) and if all looks good redirect the browser to ReturnTo, adding the OpenID properties needed by the Relying Party to verify the login.

openid_associate(URL, Handle, Assoc, []).
http://specs.openid.net/auth/2.0 (default) or http://openid.net/signon/1.1.
The library library(http/http_parameters) provides two predicates to easily fetch HTTP request parameters as a type-checked list. The library transparently handles both GET and POST requests. It builds on top of the low-level request representation described in section 3.11.
If a parameter is missing, the exception error(existence_error(http_parameter, Name), _) is thrown. If the argument cannot be converted to the requested type, an error(existence_error(Type, Value), _) is raised, where the error context indicates the HTTP parameter. If not caught, the server translates both errors into a 400 Bad request HTTP message.
Options fall into three categories: those that handle presence of the parameter, those that guide conversion and restrict types, and those that support automatic generation of documentation. First, the presence options:

default and optional are ignored and the value is returned as a list. Type-checking options are processed on each value.

list(Type).
The type and conversion options are given below. The type-language can be extended by providing clauses for the multifile hook http:convert_parameter/3.
(Type1 ; Type2), e.g., (nonneg;oneof([infinite])) to specify an integer or a symbolic value.

The last set of options is to support automatic generation of HTTP API documentation from the sources. This facility is under development in ClioPatria; see http_help.pl.
Below is an example:
reply(Request) :-
    http_parameters(Request,
                    [ title(Title, [ optional(true) ]),
                      name(Name,   [ length >= 2 ]),
                      age(Age,     [ between(0, 150) ])
                    ]),
    ...
Same as http_parameters(Request, Parameters, []).
call(Goal, +ParamName, -Options) to find the options. Intended to share declarations over many calls to http_parameters/3. Using this construct, the above can be written as below.
reply(Request) :-
    http_parameters(Request,
                    [ title(Title),
                      name(Name),
                      age(Age)
                    ],
                    [ attribute_declarations(param)
                    ]),
    ...

param(title, [optional(true)]).
param(name,  [length >= 2]).
param(age,   [integer]).
The body code (see section 3.1) is driven by a Request. This request is generated by http_read_request/2, defined in library(http/http_header).
Name(Value) elements. It provides a number of predefined elements for the result of parsing the first line of the request, followed by the additional request parameters. The predefined fields are:
Host: Host, Host is unified with the host name. If Host is of the format <host>:<port>, Host only describes <host> and a field port(Port), where Port is an integer, is added.

get, put or post. This field is present if the header has been parsed successfully.

ip(A,B,C,D) containing the IP address of the contacting host.

host for details.

?, normally used to transfer data from HTML forms that use the `GET' protocol. In the URL it consists of a www-form-encoded list of Name=Value pairs. This is mapped to a list of Prolog Name=Value terms with decoded names and values. This field is only present if the location contains a search specification.

HTTP/Major.Minor version indicator, this element indicates the HTTP version of the peer. Otherwise this field is not present.

Cookie line, the value of the cookie is broken down in Name=Value pairs, where the Name is the lowercase version of the cookie name as used for the HTTP fields.

SetCookie line, the cookie field is broken down into the Name of the cookie, the Value and a list of Name=Value pairs for additional options such as expire, path, domain or secure.
If the first line of the request is tagged with HTTP/Major.Minor, http_read_request/2 reads all input up to the first blank line. This header consists of Name:Value fields. Each such field appears as a term Name(Value) in the Request, where Name is canonicalised for use with Prolog. Canonisation implies that the Name is converted to lower case and all occurrences of - are replaced by _. The value for the Content-length field is translated into an integer. Here is an example:
?- http_read_request(user, X).
|: GET /mydb?class=person HTTP/1.0
|: Host: gollem
|:
X = [ input(user),
      method(get),
      search([ class = person
             ]),
      path('/mydb'),
      http_version(1-0),
      host(gollem)
    ].
Where the HTTP GET
operation is intended to get a
document, using a path and possibly some additional search
information, the POST
operation is intended to hand
potentially large amounts of data to the server for processing.
The Request parameter above contains the term method(post). The data posted is left on the input stream that is available through the term input(Stream) from the Request header. This data can be read using http_read_data/3 from the HTTP client library. Here is a demo implementation simply returning the parsed posted data as plain text (assuming pp/1 pretty-prints the data).
reply(Request) :-
    member(method(post), Request), !,
    http_read_data(Request, Data, []),
    format('Content-type: text/plain~n~n', []),
    pp(Data).
If the POST is initiated from a browser, content-type is generally
either application/x-www-form-urlencoded
or
multipart/form-data
. The latter is broken down
automatically if the plug-in library(http/http_mime_plugin)
is loaded.
The functionality of the server should be defined in one Prolog file (of course this file is allowed to load other files). Depending on the wanted server setup this `body' is wrapped into a small Prolog file combining the body with the appropriate server interface. There are three supported server setups. For most applications we advise the multi-threaded server. Examples of this server architecture are the PlDoc documentation system and the SeRQL Semantic Web server infrastructure.
All the server setups may be wrapped in a reverse proxy to make them available from the public web-server as described in section 3.12.7.
library(thread_httpd)
for a multi-threaded
serverThis server is harder to debug due to the involved threading, although the GUI tracer provides reasonable support for multi-threaded applications using the tspy/1 command. It can provide fast communication to multiple clients and can be used for more demanding servers.
library(inetd_httpd)
for server-per-clientThis server is very hard to debug as the server is not connected to the user environment. It provides a robust implementation for servers that can be started quickly.
All the server interfaces provide http_server(:Goal, +Options)
to create the server. The list of options differs between the
interfaces, but the servers share these common options:
The library(http/thread_httpd.pl)
provides the
infrastructure to manage multiple clients using a pool of worker-threads.
This realises a popular server design, also seen in Java Tomcat and
Microsoft .NET. As a single persistent server process maintains
communication to all clients startup time is not an important issue and
the server can easily maintain state-information for all clients.
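As a minimal sketch of this setup (the handler path and predicate names are illustrative, not part of the text above):

```prolog
% Minimal threaded HTTP server sketch using the dispatch module.
:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).

:- http_handler(root(hello), say_hello, []).   % illustrative handler

say_hello(_Request) :-
    format('Content-type: text/plain~n~n'),
    format('Hello world~n').

server(Port) :-
    http_server(http_dispatch, [port(Port)]).
```

Starting the server with ?- server(8080). creates the accept thread and the worker pool described above.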
In addition to the functionality provided by the inetd server, the
threaded server can also be used to realise an HTTPS server exploiting
the library(ssl)
library. See option ssl(+SSLOptions)
below.
The port(?Port) option specifies the port the server should listen to.
If Port is unbound, an arbitrary free port is selected and Port is
unified with this port number. The server consists of a small Prolog
thread accepting new connections on Port and dispatching these to a pool
of workers. Defined Options are:
The timeout option defaults to infinite, making each worker wait forever
for a request to complete. Without a timeout, a worker may wait forever
on a client that doesn't complete its request.

The ssl(+SSLOptions) option enables the https:// protocol. SSL allows
for encrypted communication to avoid others from tapping the wire as
well as improved authentication of client and server. The SSLOptions
option list is passed to ssl_init/3. The port option of the main option
list is forwarded to the SSL layer. See the library(ssl) library for
details.
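A sketch of an HTTPS server along these lines; the certificate and key file paths are placeholders that must point to real PEM files:

```prolog
% HTTPS server sketch: the ssl/1 option list is handed to the SSL layer.
:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).

https_server(Port) :-
    http_server(http_dispatch,
                [ port(Port),
                  ssl([ certificate_file('server-cert.pem'),  % placeholder
                        key_file('server-key.pem')            % placeholder
                      ])
                ]).
```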
This can be used to tune the number of workers for performance. Another possible application is to reduce the pool to one worker to facilitate easier debugging.
Spawn options are passed to thread_create_in_pool/3 if a pool(Pool)
option is present, or to thread_create/3 if the pool option is not
present. If the dispatch module is used (see section 3.2), spawning is
normally specified as an option to the http_handler/3 registration.
We recommend the use of thread pools. They allow registration of a set of threads using common characteristics, specify how many can be active and what to do if all threads are active. A typical application may define a small pool of threads with large stacks for computation-intensive tasks, and a large pool of threads with small stacks to serve media. The declaration could be the one below, allowing for at most 3 concurrent solvers with a backlog of 5, and at most 30 tasks creating image thumbnails with a backlog of 100.
:- use_module(library(thread_pool)).

:- thread_pool_create(compute, 3,
                      [ local(20000), global(100000), trail(50000),
                        backlog(5)
                      ]).
:- thread_pool_create(media, 30,
                      [ local(100), global(100), trail(100),
                        backlog(100)
                      ]).

:- http_handler('/solve',     solve,     [spawn(compute)]).
:- http_handler('/thumbnail', thumbnail, [spawn(media)]).
This module provides the logic that is needed to integrate a process into the Unix service (daemon) architecture. It deals with the following aspects, all of which may be used/ignored and configured using commandline options:
The typical use scenario is to write a file that loads the following components:
In the code below, load
loads the remainder of the
webserver code.
:- use_module(library(http/http_unix_daemon)).
:- initialization http_daemon.
:- [load].
Now, the server may be started using the command below. See http_daemon/0 for supported options.
% [sudo] swipl -s mainfile.pl -- [option ...]
Below are some examples. Our first example is completely silent,
running on port 80 as user www
.
% swipl -s mainfile.pl -- --user=www --pidfile=/var/run/http.pid
Our second example logs HTTP interaction with the syslog daemon for
debugging purposes. Note that the argument to --debug is a
Prolog term and must often be escaped to avoid misinterpretation by the
Unix shell. The debug option can be repeated to log multiple debug
topics.
% swipl -s mainfile.pl -- --user=www --pidfile=/var/run/http.pid \
        --debug='http(request)' --syslog=http
Broadcasting: the library uses broadcast/1 to allow hooking certain events:
To open ports below 1000 the server must be started as root, supplying
--user=User to drop privileges once the port is open. The default port
is 80.
Use --ip=localhost to restrict access to connections from
localhost if the server itself is behind an (Apache) proxy server
running on the same host.
The group of the process, complementing --user. If omitted, the login
group of the target user is used.
With --no-fork or --fork=false, the
process runs in the foreground.
If true (default false), this implies --no-fork
and presents the Prolog toplevel after starting the server.
Other options are converted by argv_options/3 and passed to http_server/1. For example, this allows for:
http_server(Handler, Options)
. The default is
provided by start_server/1.
All modern Unix systems handle a large number of the services they
run through the super-server inetd. This program reads
/etc/inetd.conf
and opens server-sockets on all ports
defined in this file. As a request comes in it accepts it and starts the
associated server such that standard I/O refers to the socket. This
approach has several advantages:
The very small generic script for handling inetd based connections is
in inetd_httpd
, defining http_server/1:
Here is the example from demo_inetd
#!/usr/bin/pl -t main -q -f

:- use_module(demo_body).
:- use_module(inetd_httpd).

main :-
    http_server(reply).
With the above file installed in /home/jan/plhttp/demo_inetd
,
the following line in /etc/inetd.conf
enables the server at port
4001 guarded by tcpwrappers. After modifying inetd, send the
daemon the HUP
signal to make it reload its configuration.
For more information, please check inetd.conf(5).
4001 stream tcp nowait nobody /usr/sbin/tcpd /home/jan/plhttp/demo_inetd
There are rumours that inetd has been ported to Windows.
To be done.
There are three options for public deployment of a service. One is to run it on a dedicated machine on port 80, the standard HTTP port. The machine may be a virtual machine running ---for example--- under VMWARE or XEN. The (virtual) machine approach isolates security threats and allows for using a standard port. The server can also be hosted on a non-standard port such as 8000 or 8080. Using non-standard ports however may cause problems with intermediate proxy- and/or firewall policies. Isolation can be achieved using a Unix chroot environment. Another option, also recommended for Tomcat servers, is the use of Apache reverse proxies. This causes the main web-server to relay requests below a given URL location to our Prolog based server. This approach has several advantages:
Note that the proxy technology can be combined with isolation methods such as dedicated machines, virtual machines and chroot jails. The proxy can also provide load balancing.
Setting up a reverse proxy
The Apache reverse proxy setup is really simple. Ensure the modules
proxy
and proxy_http
are loaded. Then add two
simple rules to the server configuration. Below is an example that makes
a PlDoc server on port 4000 available from the main Apache server at
port 80.
ProxyPass           /pldoc/ http://localhost:4000/pldoc/
ProxyPassReverse    /pldoc/ http://localhost:4000/pldoc/
Apache rewrites the HTTP headers passing by, but using the above
rules it does not examine the content. This implies that URLs embedded
in the (HTML) content must use relative addressing. If the locations on
the public and Prolog server are the same (as in the example above) it
is allowed to use absolute locations. I.e. /pldoc/search
is
ok, but http://myhost.com:4000/pldoc/search
is not.
If the locations on the server differ, locations must be relative (i.e.,
not start with /).
This problem can also be solved using the contributed Apache module
proxy_html
that can be instructed to rewrite URLs embedded
in HTML documents. In our experience, this is not trouble-free as URLs
can appear in many places in generated documents. JavaScript can create
URLs on the fly, which makes rewriting virtually impossible.
The body is called by the module library(http/http_wrapper.pl)
.
This module realises the communication between the I/O streams and the
body described in section 3.1. The
interface is realised by
http_wrapper/5:
'Keep-alive'
if both ends of the connection want to
continue the connection or close
if either side wishes to
close the connection.
This predicate reads an HTTP request-header from In,
redirects current output to a memory file and then runs call(Goal,
Request)
, watching for exceptions and failure. If Goal
executes successfully it generates a complete reply from the created
output. Otherwise it generates an HTTP server error with additional
context information derived from the exception.
http_wrapper/5 supports the following options:
...,
format('Set-Cookie: ~w=~w; path=~w~n', [Cookie, SessionID, Path]),
...,
If ---for whatever reason--- the conversion is not possible it simply unifies RelPath to AbsPath.
This library finds the public address of the running server. This can
be used to construct URLs that are visible from anywhere on the
internet. This module was introduced to deal with OpenID, where a request
is redirected to the OpenID server, which in turn redirects to our
server (see http_openid.pl
).
The address is established from the settings http:public_host and http:public_port if provided. Otherwise it is deduced from the request.
true
(default false
), try to replace a
local hostname by a world-wide accessible name.
This predicate performs the following steps to find the host and port:
1. Use the settings http:public_host and http:public_port.
2. Use the X-Forwarded-Host header, which applies if this server runs
   behind a proxy.
3. Use the Host header, which applies for HTTP 1.1 if we are contacted
   directly.
Request is the current request. If it is left unbound, and the request is needed, it is obtained with http_current_request/1.
Simple module for logging HTTP requests to a file. Logging is enabled
by loading this file and ensuring that the setting http:logfile is not
the empty atom. The default file for writing the log is httpd.log.
See
library(settings)
for details.
The level of logging can be modified using the multifile predicate
http_log:nolog/1 to hide HTTP request fields from the logfile and
http_log:password_field/1 to hide passwords from HTTP search
specifications (e.g. /topsecret?password=secret).
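A sketch of using these hooks; the field and parameter names chosen here are merely illustrative:

```prolog
:- use_module(library(http/http_log)).

:- multifile http_log:nolog/1, http_log:password_field/1.

% Do not log the user_agent field of requests (illustrative choice).
http_log:nolog(user_agent).

% Treat the search parameter `password' as a password field.
http_log:password_field(password).
```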
The log file is opened in append mode if it is not yet open. The log
file is determined from the setting http:logfile. If this setting is
set to the empty atom (''), this predicate fails.
If a file error is encountered, this is reported using print_message/2, after which this predicate silently fails.
server(Reason, Time)
.
to the logfile. This call is intended for cooperation with the Unix
logrotate facility using the following schema:
The library library(http/http_error)
defines a hook that
decorates uncaught exceptions with a stack-trace. This will generate a 500
internal server error document with a stack-trace. To enable this
feature, simply load this library. Please do note that providing error
information to the user simplifies the job of a hacker trying to
compromise your server. It is therefore not recommended to load this
file by default.
The example program calc.pl
has the error handler loaded
which can be triggered by forcing a divide-by-zero in the calculator.
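A sketch of how the hook is enabled; the handler below deliberately raises an exception so that, with this library loaded, the client receives a 500 page carrying a stack-trace:

```prolog
:- use_module(library(http/http_error)).   % decorate uncaught exceptions

% Illustrative handler that triggers a divide-by-zero.
buggy(_Request) :-
    X is 1/0,                              % raises an evaluation error
    format('Content-type: text/plain~n~n~w~n', [X]).
```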
The library library(http/http_header)
provides
primitives for parsing and composing HTTP headers. Its functionality is
normally hidden by the other parts of the HTTP server and client
libraries. We provide a brief overview of http_reply/3
which can be accessed from the reply body using an exception as explained
in section 3.1.1.
Other header fields are of the form Field(Value). Type is one of:

html(+HTML)
HTML tokens as produced by the library(http/html_write) described in
section 3.18.

tmp_file(+MimeType, +Path)
Like file(+MimeType, +Path), but do not include a
modification time header.

cgi_stream(+Stream, +Len)
Like stream(+Stream, +Len), but the data on Stream
must contain an HTTP header.

The library(http/html_write) library

Producing output for the web in the form of an HTML document is a requirement for many Prolog programs. Just using format/2 is not satisfactory as it leads to poorly readable programs generating poor HTML. This library is based on using DCG rules.
The library(http/html_write)
structures the generation
of HTML from a program. It is an extensible library, providing a DCG
framework for generating legal HTML under (Prolog) program control. It
is especially useful for the generation of structured pages (e.g. tables)
from Prolog data structures.
The normal way to use this library is through the DCG html/3. This non-terminal provides the central translation from a structured term with embedded calls to additional translation rules to a list of atoms that can then be printed using print_html/[1,2].
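A minimal sketch of this workflow, generating a fragment and printing the resulting tokens:

```prolog
:- use_module(library(http/html_write)).

% Translate a structured term to HTML tokens and print them.
demo :-
    phrase(html([ h1('Hello'),
                  p(['This paragraph was generated by ', b('html//1'), '.'])
                ]),
           Tokens),
    print_html(Tokens).
```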
The specification is a term, a list of terms, or one of the special
constructs below:

[]
Empty content.

\List
Emit the tokens in List directly (escape from the grammar).

\Term
Invoke the non-terminal Term to produce the content.

Module:Term
As \Term, but allows for invoking grammar rules in
external packages.
Emitted as &<Entity>; or &#<Entity>;
if Entity is an integer. SWI-Prolog atoms and strings are
represented as Unicode. Explicit use of this construct is rarely needed
because code-points that are not supported by the output encoding are
automatically converted into character-entities.
Tag(Content)
Tag(Attributes, Content)
Name(Value)
or
Name=Value. Value is the atomic
attribute value but allows for a limited functional notation:
encode(Atom)
location_by_id(ID)
Name(Value)
. Values are encoded as in the encode option
described above.
NAMES). Each value in the list is separated by a space. This is
particularly useful for setting multiple class attributes on an
element. For example:
... span(class([c1,c2]), ...),
The example below generates a URL that references the predicate
set_lang/1
in the application with given parameters. The http_handler/3
declaration binds /setlang
to the predicate set_lang/1
for which we provide a very simple implementation. The code between ...
is part of an HTML page showing the English flag which, when pressed,
calls set_lang(Request), where Request contains
the search parameter lang=en. Note that the
HTTP location (path) /setlang
can be moved without
affecting this code.
:- http_handler('/setlang', set_lang, []).

set_lang(Request) :-
    http_parameters(Request,
                    [ lang(Lang, [])
                    ]),
    http_session_retractall(lang(_)),
    http_session_assert(lang(Lang)),
    reply_html_page(title('Switched language'),
                    p(['Switch language to ', Lang])).

...
html(a(href(location_by_id(set_lang) + [lang(en)]),
       img(src('/www/images/flags/en.png')))),
...
Generate a complete page, including the
DOCTYPE declaration. HeadContent are elements to
be placed in the head element and BodyContent
are elements to be placed in the body element.
To achieve common style (background, page header and footer), it is
possible to define DCG non-terminals head/3
and/or body/3.
Non-terminal page/3
checks for the definition of these non-terminals in the module it is
called from as well as in the user
module. If no definition
is found, it creates a head with only the HeadContent (note
that the
title
is obligatory) and a body
with bgcolor
set to white
and the provided BodyContent.
Note that further customisation is easily achieved using html/3 directly as page/4 is (besides handling the hooks) defined as:
page(Head, Body) -->
    html([ \['<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 4.0//EN">\n'],
           html([ head(Head),
                  body(bgcolor(white), Body)
                ])
         ]).
Generate the DOCTYPE and the HTML element.
Contents is used to generate both the head and body of the page.
The non-terminal html_begin//1 opens an element, as in:
html_begin(table)
html_begin(table(border(2), align(center)))
This predicate provides an alternative to using the
\
Command syntax in the html/3
specification. The following two fragments are the same. The preferred
solution depends on your preferences as well as whether the
specification is generated or entered by the programmer.
table(Rows) -->
    html(table([ border(1), align(center), width('80%') ],
               [ \table_header,
                 \table_rows(Rows)
               ])).

% or

table(Rows) -->
    html_begin(table(border(1), align(center), width('80%'))),
    table_header,
    table_rows(Rows),
    html_end(table).
The non-terminal html/3
translates a specification into a list of atoms and layout instructions.
Currently the layout instructions are terms of the format nl(N)
,
requesting at least N newlines. Multiple consecutive nl(1)
terms are combined to an atom containing the maximum of the requested
number of newline characters.
To simplify handing the data to a client or storing it into a file, the following predicates are available from this library:
Same as reply_html_page(default, Head, Body). It emits a
complete HTML page, including the HTTP header required by
library(http_wrapper) (CGI-style). Here is a simple typical example:
reply(Request) :-
    reply_html_page(title('Welcome'),
                    [ h1('Welcome'),
                      p('Welcome to our ...')
                    ]).
The header and footer of the page can be hooked using the
grammar-rules user:head//2 and user:body//2. The first argument passed
to these hooks is the Style argument of reply_html_page/3
and the second is the 2nd (for head/4)
or 3rd (for body/4)
argument of reply_html_page/3.
These hooks can be used to restyle the page, typically by embedding the
real body content in a div. E.g., the following code
provides a menu on top of each page that is identified using the
style myapp.
:- multifile user:body//2.

user:body(myapp, Body) -->
    html(body([ div(id(top), \application_menu),
                div(id(content), Body)
              ])).
Redefining the head
can be used to pull in scripts, but
typically html_requires/3
provides a more modular approach for pulling scripts and CSS-files.
Content-length
field of an HTTP reply-header.
Modern HTML commonly uses CSS and Javascript. This requires <link> elements in the HTML <head> element or <script> elements in the <body>. Unfortunately this seriously harms re-using HTML DCG rules as components as each of these components may rely on their own style sheets or JavaScript code. We added a `mailing' system to reposition and collect fragments of HTML. This is implemented by html_post/4, html_receive/3 and html_receive/4.
The posted \-commands are executed by mailman/1
from print_html/1 or html_print_length/2.
These commands are called in the calling context of the html_post/4
call.
A typical usage scenario is to get required CSS links in the document head in a reusable fashion. First, we define css/3 as:
css(URL) -->
    html_post(css,
              link([ type('text/css'),
                     rel('stylesheet'),
                     href(URL)
                   ])).
Next we insert the unique CSS links in the page head using the following call to reply_html_page/2:
reply_html_page([ title(...),
                  \html_receive(css)
                ],
                ...)
Typically, Handler collects the posted terms, creating a term suitable for html/3 and finally calls html/3.
The library predefines the receiver channel head
at the
end of the
head
element for all pages that write the html head
through this library. The following code can be used anywhere inside an
HTML generating rule to demand a javascript in the header:
js_script(URL) -->
    html_post(head,
              script([ src(URL),
                       type('text/javascript')
                     ], [])).
This mechanism is also exploited to add XML namespace (xmlns
)
declarations to the (outer) html
element using xhtml_ns/4:
This posts the namespace declaration to the
xmlns channel. RDFa (http://www.w3.org/2006/07/SWD/RDFa/syntax/),
embedding RDF in (X)HTML, provides a typical usage scenario where we want
to publish the required namespaces in the header. We can define:
rdf_ns(Id) -->
    { rdf_global_id(Id:'', Value) },
    xhtml_ns(Id, Value).
After which we can use rdf_ns/3 as a
normal rule in html/3 to publish
namespaces from library(semweb/rdf_db)
. Note that this
macro only has effect if the dialect is set to xhtml
. In
html
mode it is silently ignored.
The required xmlns
receiver is installed by html_begin/3
using the html
tag and thus is present in any document that
opens the outer html
environment through this library.
In some cases it is practical to extend the translations imposed by
html/3. We used
this technique to define translation rules for the output of the
SWI-Prolog library(sgml)
package.
The html/3 non-terminal first calls the multifile ruleset html_write:expand//1.
html_quoted(Text)//
Quote Text for use as element content, escaping the characters
<&>.

html_quoted_attribute(Text)//
Quote Text for use as an attribute value, escaping the characters
<&>".
Though not strictly necessary, the library attempts to generate
reasonable layout in SGML output. It does this only by inserting
newlines before and after tags, on the basis of the multifile predicate
html_write:layout/3. The close-tag position may be specified as
-, requesting the output generator to omit the close-tag
altogether, or empty, telling the library that the element has
declared empty content. In this case the close-tag is not emitted
either, but in addition html/3 interprets Arg in
Tag(Arg) as a list of attributes rather than the content.
A tag that does not appear in this table is emitted without additional layout. See also print_html/[1,2]. Please consult the library source for examples.
In the following example we will generate a table of Prolog predicates we find from the SWI-Prolog help system based on a keyword. The primary database is defined by the predicate predicate/5 We will make hyperlinks for the predicates pointing to their documentation.
html_apropos(Kwd) :-
    findall(Pred, apropos_predicate(Kwd, Pred), Matches),
    phrase(apropos_page(Kwd, Matches), Tokens),
    print_html(Tokens).

% emit page with title, header and table of matches

apropos_page(Kwd, Matches) -->
    page([ title(['Predicates for ', Kwd])
         ],
         [ h2(align(center), ['Predicates for ', Kwd]),
           table([ align(center), border(1), width('80%')
                 ],
                 [ tr([ th('Predicate'), th('Summary') ])
                 | \apropos_rows(Matches)
                 ])
         ]).

% emit the rows for the body of the table

apropos_rows([]) --> [].
apropos_rows([pred(Name, Arity, Summary)|T]) -->
    html([ tr([ td(\predref(Name/Arity)),
                td(em(Summary))
              ])
         ]),
    apropos_rows(T).

% predref(Name/Arity)
%
% Emit Name/Arity as a hyperlink to
%
%     /cgi-bin/plman?name=Name&arity=Arity
%
% we must do form-encoding for the name as it may contain illegal
% characters.  www_form_encode/2 is defined in library(url).

predref(Name/Arity) -->
    { www_form_encode(Name, Encoded),
      sformat(Href, '/cgi-bin/plman?name=~w&arity=~w',
              [Encoded, Arity])
    },
    html(a(href(Href), [Name, /, Arity])).

% Find predicates from a keyword.  '$apropos_match' is an internal
% undocumented predicate.

apropos_predicate(Pattern, pred(Name, Arity, Summary)) :-
    predicate(Name, Arity, Summary, _, _),
    (   '$apropos_match'(Pattern, Name)
    ->  true
    ;   '$apropos_match'(Pattern, Summary)
    ).
The library(http/html_write) library

This library is the result of various attempts to arrive at a more satisfactory and Prolog-minded way to produce HTML text from a program. We have been using Prolog for the generation of web pages in a number of projects. Just using format/2 never was a real option, generating error-prone HTML from clumsy syntax. We started with a layer on top of format/2, keeping track of the current nesting and thus always capable of properly closing the environment.
DCG-based translation, however, naturally exploits Prolog's term-rewriting primitives. If generation fails for whatever reason, it is easy to produce an alternative document (for example holding an error message).
In a future version we will probably define a goal_expansion/2
to do compile-time optimisation of the library. Quotation of known text
and invocation of sub-rules using the \
RuleSet
and
<Module>:<RuleSet> operators are costly
operations in the analysis that can be done at compile-time.
This library is a supplement to library(http/html_write)
for producing JavaScript fragments. Its main role is to be able to call
JavaScript functions with valid arguments constructed from Prolog data.
For example, suppose you want to call a JavaScript function to process
a list of names represented as Prolog atoms. This can be done using the
call below, while without this library you would have to be careful to
properly escape special characters.
numbers_script(Names) -->
    html(script(type('text/javascript'),
                [ \js_call('ProcessNumbers'(Names))
                ])).
The accepted arguments are described with js_expression/3.
Emit a script element with the given content.

Values may be joined with the + operator, which results in
concatenation at the client side.
...,
js_script({|javascript(Id, Config)||
              $(document).ready(function() {
                  $("#"+Id).tagit(Config);
              });
          |}),
...,
The current implementation tokenizes the JavaScript input and yields syntax errors on unterminated comments, strings, etc. No further parsing is implemented, which makes it possible to produce syntactically incorrect and partial JavaScript. Future versions are likely to include a full parser, generating syntax errors.
The parser produces a term \List
, which is suitable for
js_script/3 and html/3.
Embedded variables are mapped to
\js_expression(Var)
, while the remaining text is mapped to
atoms.
...
html(script(type('text/javascript'),
            [ \js_call('x.y.z'(hello, 42))
            ])),
...
['var ', Id, ' = new ', \js_call(Term)]
null
The null value.

object(Attributes)
A JavaScript object constructed from Name(Value) or
Name=Value attributes.

{...}
Same as object(Attributes), providing a more
JavaScript-like syntax. This may be useful if the object appears
literally in the source-code, but is generally less friendly to produce
as a result from a computation.

json(Term)
Emit a term as produced by the JSON support libraries.

symbol(Atom)
Emit Atom without quotes. Normally used for the symbols
true, false and null, but can also
be used for emitting JavaScript symbols (i.e. function- or variable
names).
This module provides an abstract specification of HTTP server locations that is inspired by absolute_file_name/3. The specification is done by adding rules to the dynamic multifile predicate http:location/3. The specification is very similar to user:file_search_path/2, but takes an additional argument with options. Currently only one option is defined:
The default priority is 0. Note however that libraries may decide to provide a fall-back using a negative priority. We suggest -100 for such cases.
This library predefines three locations at priority -100. The icons
and css aliases are intended for images and css files and
are backed up by a file-search-path that allows finding the icons
and css files that belong to the server infrastructure (e.g., http_dirindex/2).
http:prefix
Here is an example that binds /login
to login/1.
The user can reuse this application while moving all locations using a
new rule for the admin location with the option [priority(10)]
.
:- multifile http:location/3.
:- dynamic   http:location/3.

http:location(admin, /, []).

:- http_handler(admin(login), login, []).

login(Request) :-
    ...
Options currently only supports the priority of the path. If
http:location/3 returns multiple solutions, the one with the highest
priority is selected. The default priority is 0.
This library provides a default for the abstract location
root. This defaults to the setting http:prefix or, when not
available, to the path /. It is advised to define all
locations (ultimately) relative to root. For example, use
root('home.html') rather than '/home.html'.
Creates a global (i.e., starting with http://) URI
for the abstract specification Spec. Use http_absolute_location/3
to create references to locations on the same server.
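A sketch contrasting the two predicates; the exact output depends on the server configuration and the http:prefix setting:

```prolog
:- use_module(library(http/http_path)).

demo :-
    http_absolute_location(root('home.html'), Local, []),
    http_absolute_uri(root('home.html'), Global),
    format('local reference:  ~w~n', [Local]),
    format('global reference: ~w~n', [Global]).
```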
This library allows for abstract declaration of available CSS and
Javascript resources and their dependencies using html_resource/2.
Based on these declarations, html generating code can declare that it
depends on specific CSS or Javascript functionality, after which this
library ensures that the proper links appear in the HTML head. The
implementation is based on the mail system implemented by html_post/2
of the html_write.pl library.
Declarations come in two forms. First of all http locations are
declared using the http_path.pl
library. Second, html_resource/2
specifies HTML resources to be used in the head
and their
dependencies. Resources are currently limited to Javascript files (.js)
and style sheets (.css). It is trivial to add support for other material
in the head. See
html_include/3.
For usage in HTML generation, there is the DCG rule html_requires/3 that demands named resources in the HTML head.
All calls to html_requires/3 for the page are collected and duplicates are removed. Next, the following steps are taken:
Use ?-
debug(html(script))
. to see the
requested and final set of resources. All declared resources are in html_resource/3.
The edit/1 command recognises the names of
HTML resources.
true
(default false
), do not include About
itself, but only its dependencies. This allows for defining an alias for
one or more resources.
Registering the same About multiple times extends the properties defined for About. In particular, this allows for adding additional dependencies to a (virtual) resource.
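A sketch of these declarations; the URLs and resource names are hypothetical:

```prolog
:- use_module(library(http/html_head)).
:- use_module(library(http/html_write)).

% jquery is a virtual resource: requiring it only pulls in its dependency.
:- html_resource(jquery,
                 [ virtual(true),
                   requires('/js/jquery.js')       % hypothetical URL
                 ]).
:- html_resource('/js/app.js',
                 [ requires(jquery)                % loaded after jquery.js
                 ]).

demo_body -->
    html([ \html_requires('/js/app.js'),           % links appear in the head
           h1('Demo')
         ]).
```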
The requirement is posted to the head channel
using html_post/2.
The actual dependencies are computed during the HTML output phase by
html_insert_resource/3.
This module provides convenience predicates to include PWP (Prolog Well-formed Pages) in a Prolog web-server. It provides the predicates pwp_handler/2 and reply_pwp_page/3. The example below serves PWP files from the directory /web/pwp/.
user:file_search_path(pwp, '/web/pwp').

:- http_handler(root(.), pwp_handler([path_alias(pwp)]), [prefix]).
Options include:
The default directory index file is index.pwp.

If true (default is false), allow for
?view=source to serve the PWP file as source.
Options supported are:
If true (default false), process the PWP file
in a module constructed from its canonical absolute path. Otherwise, the
PWP file is processed in the calling module.
Initial context:
get
, post
, put
or head
While processing the script, the file-search-path pwp includes the current location of the script. I.e., the following will find myprolog in the same directory as where the PWP file resides.
pwp:ask="ensure_loaded(pwp(myprolog))"
The
HTTP protocol provides for transfer encodings. These define
filters applied to the data described by the Content-type
.
The two most popular transfer encodings are chunked
and
deflate
. The chunked
encoding avoids the need
for a Content-length
header, sending the data in chunks,
each of which is preceded by a length. The deflate
encoding
provides compression.
Transfer-encodings are supported by filters defined as foreign
libraries that realise an encoding/decoding stream on top of another
stream. Currently there are two such libraries: library(http/http_chunked.pl)
and library(zlib.pl)
.
There is an emerging hook interface dealing with transfer encodings.
The
library(http/http_chunked.pl)
provides a hook used by
library(http/http_open.pl)
to support chunked encoding in http_open/3.
Note that both http_open.pl
and http_chunked.pl
must be loaded for http_open/3
to support chunked encoding.
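Following the note above, a sketch that loads both libraries before fetching a possibly chunked resource (the URL is a placeholder):

```prolog
:- use_module(library(http/http_open)).
:- use_module(library(http/http_chunked)).   % enable chunked decoding

fetch(URL, Codes) :-
    setup_call_cleanup(
        http_open(URL, In, []),
        read_stream_to_codes(In, Codes),
        close(In)).
```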
The library(http/http_chunked) library

The library(http/websocket) library

WebSocket is a lightweight message-oriented protocol on top of TCP/IP streams. It is typically used as an upgrade of an HTTP connection to provide bi-directional communication, but can also be used in isolation over arbitrary (Prolog) streams.
The SWI-Prolog interface is based on streams and provides ws_open/3 to create a websocket stream from any Prolog stream. Typically, both an input and output stream are wrapped and then combined into a single object using stream_pair/3.
The high-level interface provides http_upgrade_to_websocket/3 to realise a websocket inside the HTTP server infrastructure and http_open_websocket/3 as a layer over http_open/3 to realise a client connection. After establishing a connection, ws_send/2 and ws_receive/2 can be used to send and receive messages. The predicate ws_close/2 is provided to perform the closing handshake and dispose of the stream objects.
Additional options such as subprotocol(Protocol) may be passed to negotiate a websocket subprotocol.
The following example exchanges a message with the html5rocks.websocket.org echo service:

?- URL = 'ws://html5rocks.websocket.org/echo',
   http_open_websocket(URL, WS, []),
   ws_send(WS, text('Hello World!')),
   ws_receive(WS, Reply),
   ws_close(WS, 1000, "Goodbye").
URL = 'ws://html5rocks.websocket.org/echo',
WS = <stream>(0xe4a440,0xe4a610),
Reply = websocket{data:"Hello World!", opcode:text}.
WebSocket is a stream pair (see stream_pair/3).
Goal is executed as call(Goal, WebSocket), where WebSocket is a stream pair. Options:

guard(+Boolean)
If true (default), guard the execution of Goal and close the websocket on both normal and abnormal termination of Goal. If false, Goal itself is responsible for the created websocket. This can be used to create a single thread that manages multiple websockets using I/O multiplexing.

timeout(+Timeout)
Timeout applied to the websocket stream. Default is infinite.
Note that the Request argument is the last argument, for cooperation with http_handler/3. A simple echo server that can be accessed at /ws/ can be implemented as:

:- use_module(library(http/websocket)).
:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).

:- http_handler(root(ws),
                http_upgrade_to_websocket(echo, []),
                [spawn([])]).

echo(WebSocket) :-
    ws_receive(WebSocket, Message),
    (   Message.opcode == close
    ->  true
    ;   ws_send(WebSocket, Message),
        echo(WebSocket)
    ).
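Assuming the handler above is served by a local HTTP server on port 8080 (the port is illustrative), a client could talk to the echo service like this:

```prolog
:- use_module(library(http/websocket)).

% Connect to the echo handler registered at root(ws), send one
% message and return the reply.
echo_roundtrip(Reply) :-
    http_open_websocket('ws://localhost:8080/ws', WS, []),
    ws_send(WS, text('Hello!')),
    ws_receive(WS, Reply),
    ws_close(WS, 1000, "done").
```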
binary(+Content)
As text(+Text), but all character codes produced by Content must be in the range [0..255]. Typically, Content will be an atom or string holding binary data.

string(+Text)
Same as text(+Text), provided for consistency.
If Message is a dict, it must contain an opcode key. Other keys used are:

format(+Format)
One of string, prolog or json. See ws_receive/3.

data(+Term)
The data itself.

Note that ws_start_message/3 does not unlock the stream. This is done by ws_send/1. This implies that multiple threads can use ws_send/2 and the messages are properly serialized.
If the peer closed the connection, Message is a dict with opcode set to close and data bound to the atom end_of_file. Other keys of the message dict are:

data(String)
The data of the message.

rsv(RSV)
The RSV bits of the frame.

If a ping message is received and WebSocket is a stream pair, ws_receive/2 replies with a pong and waits for the next message.
The predicate ws_receive/3 accepts a format(Format) option that controls how text messages are returned: as a string (the default), or parsed as json or prolog.
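Building on these primitives, a receive loop might look like this (a sketch; handle_message/1 is a hypothetical application predicate):

```prolog
% Illustrative receive loop: text messages are parsed as JSON and the
% loop stops when the peer closes the connection.
ws_loop(WebSocket) :-
    ws_receive(WebSocket, Message, [format(json)]),
    (   Message.opcode == close
    ->  true                          % peer closed; end the loop
    ;   handle_message(Message.data), % hypothetical handler
        ws_loop(WebSocket)
    ).
```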
Send a close message if this was not already sent and wait for the close reply.

Code is the numerical code indicating the close status. This is a 16-bit integer. The codes are defined in section 7.4.1., Defined Status Codes, of RFC 6455. Notably, 1000 indicates a normal closure.

Data is currently interpreted as text.
mode(+Mode)
One of server or client. If client, messages are sent masked.

close_parent(+Boolean)
If true (default), closing WSStream also closes Stream.

subprotocol(+Protocol)
Set the subprotocol of the websocket. See also the subprotocols option of http_open_websocket/3 and http_upgrade_to_websocket/3.
A typical sequence to turn a pair of streams into a WebSocket is:

...,
Options = [mode(server), subprotocol(chat)],
ws_open(Input,  WsInput,  Options),
ws_open(Output, WsOutput, Options),
stream_pair(WebSocket, WsInput, WsOutput).
This library manages a hub that consists of clients that are connected using a websocket. Messages arriving at any of the websockets are sent to the event queue of the hub. In addition, the hub provides a broadcast interface. A typical usage scenario for a hub is a chat server. A scenario for realising such a server is:
The thread(s) can talk to clients using two predicates: one to send a message to a specific client and one to broadcast a message to all clients of the hub.

A hub consists of (currently) four message queues and a simple dynamic fact. Threads that are needed for the communication tasks are created on demand and die if no more work needs to be done. The hub is represented by a dict with the following keys:

name
The name of the hub.

queues
A dict holding the hub's message queues, including the event queue on which the application thread(s) can listen.
After creating a hub, the application normally creates a thread that listens to Hub.queues.event and exposes some mechanisms to establish websockets and add them to the hub using hub_add/3.
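A minimal sketch of this setup, under the assumption that the hub API provides hub_create/3, hub_add/3 and hub_broadcast/2 in library(http/hub); the handler location and behaviour are illustrative:

```prolog
:- use_module(library(http/hub)).        % assumed hub library
:- use_module(library(http/websocket)).
:- use_module(library(http/http_dispatch)).

% Create a hub and a listener thread that rebroadcasts every event
% to all connected clients (a trivial chat server).
start_chat :-
    hub_create(chat, Hub, []),           % assumed: hub_create(+Name, -Hub, +Options)
    thread_create(chat_loop(Hub), _, [alias(chat_loop)]).

chat_loop(Hub) :-
    thread_get_message(Hub.queues.event, Message),
    hub_broadcast(chat, Message),        % assumed broadcast interface
    chat_loop(Hub).

% Websockets arriving at /chat join the hub; guard(false) hands
% ownership of the socket to the hub.
accept_chat(WebSocket) :-
    hub_add(chat, WebSocket, _Id).

:- http_handler(root(chat),
                http_upgrade_to_websocket(accept_chat, [guard(false)]),
                [spawn([])]).
```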
Message is either a single message (as accepted by ws_send/2) or a list of such messages.
From http://json.org, " JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language."
JSON is interesting to Prolog because using AJAX web technology we can easily create web-enabled user interfaces where we implement the server side using the SWI-Prolog HTTP services provided by this package. The interface consists of three libraries:
- library(http/json) provides support for the core JSON object serialization.
- library(http/json_convert) converts between the primary representation of JSON terms in Prolog and more application-oriented Prolog terms, e.g. point(X,Y) vs. object([x=X,y=Y]).
- library(http/http_json) hooks the conversion libraries into the HTTP client and server libraries.
This module supports reading and writing JSON objects. Two Prolog representations are supported (the new representation is only available in SWI-Prolog version 7 and later):
The classical representation maps a JSON object to a term json(NameValueList), a JSON string to an atom and the JSON constants null, true and false to @(null), @(true) and @(false).

The new representation maps a JSON object to a dict, a JSON string to a Prolog string and the JSON constants to the atoms null, true and false. Strings can alternatively be represented as atom, string or codes (see the value_string_as option of json_read_dict/3).
A JSON object is mapped to a term json(NameValueList), where NameValueList is a list of Name=Value pairs. Name is an atom created from the JSON string. The constants true and false are mapped, like in JPL, to @(true) and @(false); null is mapped to the Prolog term @(null).
Here is a complete example in JSON and its corresponding Prolog term.
{ "name":"Demo term",
  "created": { "day":null,
               "month":"December",
               "year":2007
             },
  "confirmed":true,
  "members":[1,2,3]
}

json([ name='Demo term',
       created=json([ day= @null,
                      month='December',
                      year=2007
                    ]),
       confirmed= @true,
       members=[1, 2, 3]
     ])
The following options are processed:

null(+NullTerm)
Term used for the JSON constant null. Default @(null).

true(+TrueTerm)
Term used for the JSON constant true. Default @(true).

false(+FalseTerm)
Term used for the JSON constant false. Default @(false).

value_string_as(+Type)
Prolog type used for representing JSON string values. Default atom. The alternative is string, producing a packed string object. Please note that codes or chars would produce ambiguous output and are therefore not supported.
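As an illustration, the sketch below reads JSON from a string stream and maps the constants to plain atoms (open_string/2, as available in recent SWI-Prolog versions, provides the input stream):

```prolog
:- use_module(library(http/json)).

% Map the JSON constants to plain atoms instead of the default
% @(null)/@(true)/@(false) wrappers.
read_json_atomic(String, Term) :-
    open_string(String, In),
    json_read(In, Term, [null(null), true(true), false(false)]).
```

For example, read_json_atomic("{\"ok\":true}", T) should bind T to json([ok=true]).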
If json_read/3 encounters end-of-file before any real data it binds Term to the term @(end_of_file).
Values can be of the form #(Term), which causes Term to be stringified if it is not an atom or string. Stringification is based on term_string/2.
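For instance, a compound value can be emitted as its textual form via #(Term) (a sketch):

```prolog
:- use_module(library(http/json)).

% #(Term) causes non-atomic values to be stringified via term_string/2,
% so the compound date/3 term below is written as a JSON string.
demo :-
    json_write(current_output, json([when= #(date(2007,12,1))])).
```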
The version 7 dict type is supported as well. If the dict has a tag, a property "type":Tag is added to the object. This behaviour can be changed using the tag option (see below).
For example:
?- json_write(current_output, point{x:1,y:2}).
{
  "type":"point",
  "x":1,
  "y":2
}
In addition to the options recognised by json_read/3, the following option is processed:

serialize_unknown(+Boolean)
If true (default false), serialize unknown terms and print them as a JSON string. The default raises a type error. Note that this option only makes sense if you can guarantee that the passed value is not an otherwise valid Prolog representation of a Prolog term.
If a string is emitted, the sequence </ is emitted as <\/. This is valid JSON syntax which ensures that JSON objects can be safely embedded into an HTML <script> element.
The JSON constants true, false and null are represented using the Prolog atoms true, false and null. A type field in an object assigns a tag to the dict.
In addition to the options processed by json_read/3, json_read_dict/3 processes this additional option:

value_string_as(+Type)
Prolog type used for representing JSON string values: one of atom, string or codes.
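A small sketch of the dict interface (open_string/2 provides the input stream; the JSON text is illustrative):

```prolog
:- use_module(library(http/json)).

% Read a JSON object into a SWI-Prolog 7 dict, representing JSON
% strings as atoms.
read_point(Dict) :-
    open_string("{\"x\":1, \"y\":2}", In),
    json_read_dict(In, Dict, [value_string_as(atom)]).
```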
The idea behind this module is to provide a flexible high-level mapping between Prolog terms as you would like to see them in your application and the standard representation of a JSON object as a Prolog term. For example, an X-Y point may be represented in JSON as {"x":25, "y":50}. Represented in Prolog this becomes json([x=25,y=50]), but this is a rather unnatural representation from the Prolog point of view.
This module allows for defining records (just like library(record)) that provide a transparent two-way transformation between the two representations.
:- json_object point(x:integer, y:integer).
This declaration causes prolog_to_json/2 to translate the native Prolog representation into a JSON Term:
?- prolog_to_json(point(25,50), X).
X = json([x=25, y=50])
A json_object/1 declaration can define multiple objects separated by a comma (,), similar to the dynamic/1 directive. Optionally, a declaration can be qualified using a module. The conversion predicates prolog_to_json/2 and json_to_prolog/2 first try a conversion associated with the calling module. If not successful, they try conversions associated with the module user.
JSON objects have no type. This can be solved by adding an extra field to the JSON object, e.g. {"type":"point", "x":25, "y":50}. As Prolog records are typed by their functor, we need some notation to handle this gracefully. This is achieved by adding +Fields to the declaration. I.e.
:- json_object point(x:integer, y:integer) + [type=point].
Using this declaration, the conversion becomes:
?- prolog_to_json(point(25,50), X).
X = json([x=25, y=50, type=point])
The predicate json_to_prolog/2 is often used after http_read_json/2, and prolog_to_json/2 before reply_json/1. For now we consider them separate predicates because the transformation may be too general, too slow or not needed for dedicated applications. Using a separate step also simplifies debugging this rather complicated process.
Fields is a list of terms f(Name, Type, Default, Var), ordered by Name. Var is the corresponding variable in Term.

Declare a JSON object, similar to library(record). E.g.

?- json_object point(x:int, y:int, z:int=0).
The type arguments are either types as known to library(error) or functor names of other JSON objects. The constant any indicates an untyped argument. If this is a JSON term, it becomes subject to json_to_prolog/2. I.e., using the type list(any) causes the conversion to be executed on each element of the list.
If a field has a default, the default is used if the field is not specified in the JSON object. Extending the record type definition, types can be of the form (Type1|Type2). The type null means that the field may not be present.
Conversion of JSON to Prolog applies if all non-defaulted arguments can be found in the JSON object. If multiple rules match, the term with the highest arity gets preference.
Terms are converted according to json_object/1 declarations. If a json_object/1 declaration declares a field of type boolean, commonly used truth-values in Prolog are converted to JSON booleans. Boolean translation accepts one of true, on, 1 and @true for true, and false, fail, off, 0 and @false for false.
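The boolean translation can be sketched with a hypothetical declaration:

```prolog
:- use_module(library(http/json_convert)).

% Hypothetical record with a boolean field.
:- json_object lamp(on:boolean).

% Per the translation above, any accepted truth-value normalises to a
% JSON boolean, e.g.:
%
%   ?- prolog_to_json(lamp(on), J).
%
% should yield J = json([on= @true]).
```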
Terms are converted according to json_object/1 declarations. An efficient transformation is non-trivial, but we rely on the assumption that, although the order of fields in JSON terms is irrelevant and can therefore vary a lot, practical applications will normally generate the JSON objects in a consistent order.

If a field in a json_object is declared of type boolean, @true and @false are translated to true or false, the most commonly used Prolog representation for truth-values.
This module inserts the JSON parser for documents of MIME type application/jsonrequest and application/json requested through the http_client.pl library.
Typically JSON is used by Prolog HTTP servers. This module supports two JSON representations: the classical representation and the new representation supported by the SWI-Prolog version 7 extended data types. Below is a skeleton for handling a JSON request, answering in JSON using the classical interface.
handle(Request) :-
    http_read_json(Request, JSONIn),
    json_to_prolog(JSONIn, PrologIn),
    <compute>(PrologIn, PrologOut),      % application body
    prolog_to_json(PrologOut, JSONOut),
    reply_json(JSONOut).
When using dicts, the conversion step is generally not needed and the code becomes:
handle(Request) :-
    http_read_json_dict(Request, DictIn),
    <compute>(DictIn, DictOut),
    reply_json(DictOut).
This module also integrates JSON support into the HTTP client provided by http_client.pl. Posting a JSON query and processing the JSON reply (or any other reply understood by http_read_data/3) is as simple as below, where Term is a JSON term as described in json.pl and Reply is of the same format if the server replies with JSON.
..., http_post(URL, json(Term), Reply, [])
http_post(URL, json(Term), Reply, Options)
http_post(URL, json(Term, Options), Reply, Options)
If Options are passed, these are handed to json_write/3. The emitted Content-type is application/json; charset=UTF8. The charset=UTF8 should not be required because JSON is defined to be UTF-8 encoded, but some clients insist on it. In addition, this option is processed:

json_object(+As)
One of term (the classical JSON representation) or dict to use the new dict representation. If omitted and Term is a dict, dict is assumed. (SWI-Prolog version 7.)
Simple and partial implementation of MIME encoding. MIME is covered by RFC 2045. This library is used by e.g., http_post_data/3 when using the form_data(+ListOfData) input specification.
MIME decoding is now arranged through library(mime) from the clib package, based on the external librfc2045 library. Most likely the functionality of this package will be moved to the same library someday. Packing however is a lot simpler than parsing.
Name = Value
Adds a form field Name with content Value. A filename is present if Value is of the form file(File); Value may be any of the remaining value specifications. The field is packed with the header:

Content-Disposition: form-data; name="Name"[; filename="<File>"]
Content-type is derived from the File using file_mime_type/2. If the content-type is text/_, the file data is copied in text mode, which implies that it is read in the default encoding of the system and written using the encoding of the Out stream. Otherwise the file data is copied in binary mode.
mime([type(text/html)], '<b>Hello World</b>', [])
Out is a stream opened for writing. Typically, it should be opened in text mode using UTF-8 encoding.
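A sketch using mime_pack/3 from library(http/mimepack); the field values are illustrative:

```prolog
:- use_module(library(http/mimepack)).

% Pack a plain form field and an embedded HTML part into a
% multipart MIME message on Out.
pack_demo(Out) :-
    mime_pack([ name = 'Jan',
                mime([type(text/html)], '<b>Hello World</b>', [])
              ],
              Out, _Boundary).
```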
Writing servers is an inherently dangerous job that should be carried out with care. You have basically started a program on a public terminal and invited strangers to use it. When using the interactive server or an inetd-based server, the server runs with your privileges. When using CGI scripts, it runs with the privileges of your web-server. Though it should not be possible to fatally compromise a Unix machine using user privileges, getting unconstrained access to the system is highly undesirable.

Symbolic languages have an additional handicap in their inherent possibilities to modify the running program and dynamically create goals (this also applies to the popular Perl and PHP scripting languages). Here are some guidelines.
Check user-supplied file names. Obvious requests such as /etc/passwd, but also ../../../../../etc/passwd, are tried by attackers to learn about the system they want to attack. So, expand provided names using absolute_file_name/[2,3] and verify they are inside a folder reserved for the server. Avoid symbolic links from this subtree to the outside world. The example below checks the validity of filenames. The first call ensures proper canonisation of the paths to avoid a mismatch due to symbolic links or other filesystem ambiguities.
check_file(File) :-
    absolute_file_name('/path/to/reserved/area', Reserved),
    absolute_file_name(File, Tried),
    sub_atom(Tried, 0, _, _, Reserved).
Before opening a pipe using open(pipe(Command), ...), verify the argument once more. Use process_create/3 in preference to shell/1, as it avoids stringification of arguments (Unix) and ensures proper quoting of arguments (Windows).
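For example, listing a user-supplied directory can be done safely with process_create/3: the directory is passed as a separate argument and is never interpolated into a shell command (a sketch):

```prolog
:- use_module(library(process)).

% Run `ls -l Dir` without going through a shell, so Dir cannot
% inject extra commands.
list_dir(Dir) :-
    process_create(path(ls), ['-l', file(Dir)],
                   [stdout(pipe(Out))]),
    call_cleanup(copy_stream_data(Out, user_output),
                 close(Out)).
```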
reply(Query) :-
    member(search(Args), Query),
    member(action=Action, Args),
    member(arg=Arg, Args),
    call(Action, Arg).          % NEVER EVER DO THIS!
All your attacker has to do is specify Action as shell and Arg as /bin/sh, and he has an uncontrolled shell!
It is tempting to register your application at the root of the server address space (/). This is not a good idea. It is advised to have all locations in a server below a directory with an informative name. Consider making the root location something that can be changed using a global setting.
The SWI-Prolog HTTP library is in active use in a large number of projects. It is considered one of the SWI-Prolog core libraries that is actively maintained and regularly extended with new features. This is particularly true for the multi-threaded server. The inetd based server may be applicable for infrequent requests where the startup time is less relevant. The XPCE based server is considered obsolete.
This library is by no means complete and you are free to extend it.