You are viewing william_os4y

Sat, Apr. 14th, 2012, 01:00 pm
Fapws3-11 is out

Despite what I said in February, I've decided to release a new version of Fapws3 for Python 2.x.

Indeed, thanks to several testers (Blacknoir, Wigunawan), I learned that Fapws3 was crashing on some systems.
This was a nasty bug to solve, because I couldn't reproduce it on my machines.
Since everything sounds OK now, I've decided to still release one last :-) version for those willing to use it on a Python 2.x up to and including 2.7.

Pypi: http://pypi.python.org/pypi/fapws3/0.11.dev
TGZ: https://github.com/william-os4y/fapws3/tarball/v0.11
Github: https://github.com/william-os4y/fapws3/tags


If you permit me, I'll also use this announcement to share the status of the current development branch: python3.
Indeed, since February I've kept progressing, and one of the main added features concerns the possibility to upload files into a temporary file (no longer in memory).
This closes the issue of small systems (low memory) having to deal with big uploads (bigger than the available memory).
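The idea can be sketched with Python's standard tempfile module (the function below is only illustrative, not Fapws3's actual internals): data is spooled to memory first and transparently rolls over to a real temporary file on disk once it grows beyond a threshold.

```python
import tempfile

# Illustrative sketch only, not Fapws3's real code: spool an upload
# to memory first, rolling over to a temporary file on disk once it
# exceeds max_size, so a big upload cannot exhaust the RAM.
def store_upload(chunks, max_size=1024 * 1024):
    spool = tempfile.SpooledTemporaryFile(max_size=max_size)
    for chunk in chunks:
        spool.write(chunk)
    spool.seek(0)
    return spool

# A small upload stays in memory; a bigger one silently moves to disk.
upload = store_upload([b"part1", b"part2"])
print(upload.read())
```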

I'm now focusing on testing and documentation.
Indeed, reading the comments here and there on the internet about Fapws3, some people are complaining about the poor documentation. I thought that the (big) list of samples would be enough, but ... it's clearly not as good as real documentation.
This is a task that has been on the list for a long time and that I have to tackle. So, I'll do it for the Python3 release.
As the Python developers suggest, this Python3 code also runs perfectly on Python 2.7.
But the good news is that it has also passed all tests on a NetBSD system running Python 2.6 :-).


All that said, I thank all contributors who have proposed interesting code, patches, test results, recommendations, ...
(cf. the README)

Sun, Feb. 12th, 2012, 12:41 pm
Fapws3-0.10 is out

Despite a lot of progress on the python3 branch, I propose a new release
of Fapws3 for Python 2.x.

Indeed, in the year since the last stable release, lots of new
features have been added to Fapws :-).
And several are coming from you, the community !!! Thus, I first
thank all contributors (code, tests, ...).
This is a great achievement.
More specifically, I would like to thank Stiletto, Liu Qishuai and Keith for
their active contributions.

To make it short, this release comes with the following new features:
- support of tuples for callback output.
- support of sockets. Thanks to this, Fapws3 can serve webpages via a
socket instead of a port.
- better support for OSX

You can download the tarfile from the github website:
https://github.com/william-os4y/fapws3
Or directly via the following link:
https://github.com/william-os4y/fapws3/tarball/v0.10
Pypi users can grab it there too: pip install fapws3

As said on the mailing list, the python3 version of Fapws3 is nearly
out (that release will be backward compatible with Python 2.7).
Thus I plan to keep 2 parallel branches as long as there is a need for
Python 2.5 or 2.6.
This will change in the following months, but currently Fapws3 for
Python 2.x is in the master branch.

Sat, Mar. 26th, 2011, 11:41 am
Python tests used to compare different OSes.

A bit like Jaime did with Ruby (http://linbsd.org/), I've used the Python test scripts to compare different OSes.

All OSes are installed on the same machine (each in its own partition) with default install parameters. All hardware elements remained the same during the tests. The machine is a four-year-old PC with a 2.4 GHz AMD CPU and 2 GB of RAM.

Tests were made with the latest Python2 release: 2.7.1.

To avoid binary package glitches, I've downloaded and compiled Python myself on each OS by executing the standard ./configure; make. The compiler, etc. are the defaults proposed by each OS.

To get the time spent on each test, I've just added 1 line to the regrtest.py script provided within the Python tar file:
line 581: print test_times
In other words, at the end of the tests, this prints a list of (time spent, test name) tuples.
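Once printed, that list is trivial to post-process; for instance (timings taken from the results below):

```python
# regrtest's test_times is a list of (time spent, test name) tuples;
# sorting it in descending order lists the slowest tests first.
test_times = [(0.06109, "test_abc"),
              (1.49062, "test_aifc"),
              (0.00736, "test_anydbm")]
for elapsed, name in sorted(test_times, reverse=True):
    print("%-15s %8.5f s" % (name, elapsed))
```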

I've tested it with NetBSD-5.1, FreeBSD 8.2, Ubuntu 10.10 and Archlinux. The Archlinux system had been fully updated just before the tests; I'll thus call it Arch 03/2011.

To be detailed, the command used after compilation was:
./python -Wd -3 -E -tt Lib/test/regrtest.py -ucpu

(This is what you will execute if you perform a "make test".) I've just focused the run on the CPU set of tests.

The global results are:
------------------------

Netbsd 5.1 wins 64
FreeBSD 8.2 wins 14
Ubuntu 10.10 wins 84
Arch 03/2011 wins 196

But having 2 results very close to each other does not really mean that 1 OS is significantly better than the others. To take this into account, I've introduced the tolerance principle: every OS whose result is very close (within x percent) to the best result also wins the test.
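For the curious, the tolerance principle boils down to a few lines of Python (a sketch of the comparison, here fed with the test_abstract_numbers timings from below):

```python
# An OS "wins" a test when its time is within `tolerance` percent of
# the best (lowest) time; None stands for the "NA" entries.
def winners(results, tolerance=0.0):
    timed = {name: t for name, t in results.items() if t is not None}
    if not timed:
        return []                      # "no winners", as for test_aepack
    best = min(timed.values())
    limit = best * (1 + tolerance / 100.0)
    return sorted(name for name, t in timed.items() if t <= limit)

test_abstract_numbers = {"Netbsd 5.1": 0.00120, "FreeBSD 8.2": 0.00256,
                         "Ubuntu 10.10": 0.00140, "Arch 03/2011": 0.00119}
print(winners(test_abstract_numbers, 0))   # only Arch 03/2011 wins
print(winners(test_abstract_numbers, 5))   # Netbsd 5.1 joins within 5%
```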

If we accept a tolerance of 5%, the results are:
Netbsd 5.1 wins 97
FreeBSD 8.2 wins 34
Ubuntu 10.10 wins 145
Arch 03/2011 wins 245

If we accept a tolerance of 10%, the results are:
Netbsd 5.1 wins 136
FreeBSD 8.2 wins 54
Ubuntu 10.10 wins 186
Arch 03/2011 wins 275



The detailed results are:
----------------------------------
(Livejournal does not allow me to post the full reports, because they are too long. I thus post a subset of them. Please contact me (william _dot os4y at_ gmail dot_ com) if you want the full report by email.)

- With 0% tolerance:
=============
test_abc
	Netbsd 5.1:	0.06109,	delta =   0.00%
	FreeBSD 8.2:	0.06670,	delta =   9.18%
	Ubuntu 10.10:	0.06814,	delta =  11.54%
	Arch 03/2011:	0.08548,	delta =  39.92%
winner is:  Netbsd 5.1

test_abstract_numbers
	Netbsd 5.1:	0.00120,	delta =   0.16%
	FreeBSD 8.2:	0.00256,	delta = 114.16%
	Ubuntu 10.10:	0.00140,	delta =  17.51%
	Arch 03/2011:	0.00119,	delta =   0.00%
winner is:  Arch 03/2011

test_aepack
	Netbsd 5.1:	NA
	FreeBSD 8.2:	NA
	Ubuntu 10.10:	NA
	Arch 03/2011:	NA
no winners

test_aifc
	Netbsd 5.1:	1.49062,	delta =  28.61%
	FreeBSD 8.2:	1.17668,	delta =   1.52%
	Ubuntu 10.10:	1.19231,	delta =   2.87%
	Arch 03/2011:	1.15902,	delta =   0.00%
winner is:  Arch 03/2011

test_al
	Netbsd 5.1:	NA
	FreeBSD 8.2:	NA
	Ubuntu 10.10:	NA
	Arch 03/2011:	NA
no winners

test_anydbm
	Netbsd 5.1:	0.00736,	delta =   0.00%
	FreeBSD 8.2:	0.01428,	delta =  94.00%
	Ubuntu 10.10:	1.17291,	delta = 15838.95%
	Arch 03/2011:	0.03939,	delta = 435.24%
winner is:  Netbsd 5.1

...


- With 5% tolerance:
================
test_abc
	Netbsd 5.1:	0.06109,	delta =   0.00%
	FreeBSD 8.2:	0.06670,	delta =   9.18%
	Ubuntu 10.10:	0.06814,	delta =  11.54%
	Arch 03/2011:	0.08548,	delta =  39.92%
winner is:  Netbsd 5.1

test_abstract_numbers
	Netbsd 5.1:	0.00120,	delta =   0.16%
	FreeBSD 8.2:	0.00256,	delta = 114.16%
	Ubuntu 10.10:	0.00140,	delta =  17.51%
	Arch 03/2011:	0.00119,	delta =   0.00%
winners are:  Netbsd 5.1, Arch 03/2011

test_aepack
	Netbsd 5.1:	NA
	FreeBSD 8.2:	NA
	Ubuntu 10.10:	NA
	Arch 03/2011:	NA
no winners

test_aifc
	Netbsd 5.1:	1.49062,	delta =  28.61%
	FreeBSD 8.2:	1.17668,	delta =   1.52%
	Ubuntu 10.10:	1.19231,	delta =   2.87%
	Arch 03/2011:	1.15902,	delta =   0.00%
winners are:  FreeBSD 8.2, Ubuntu 10.10, Arch 03/2011

test_al
	Netbsd 5.1:	NA
	FreeBSD 8.2:	NA
	Ubuntu 10.10:	NA
	Arch 03/2011:	NA
no winners

test_anydbm
	Netbsd 5.1:	0.00736,	delta =   0.00%
	FreeBSD 8.2:	0.01428,	delta =  94.00%
	Ubuntu 10.10:	1.17291,	delta = 15838.95%
	Arch 03/2011:	0.03939,	delta = 435.24%
winner is:  Netbsd 5.1

...



- With 10% tolerance:
==================
test_abc
	Netbsd 5.1:	0.06109,	delta =   0.00%
	FreeBSD 8.2:	0.06670,	delta =   9.18%
	Ubuntu 10.10:	0.06814,	delta =  11.54%
	Arch 03/2011:	0.08548,	delta =  39.92%
winners are:  Netbsd 5.1, FreeBSD 8.2

test_abstract_numbers
	Netbsd 5.1:	0.00120,	delta =   0.16%
	FreeBSD 8.2:	0.00256,	delta = 114.16%
	Ubuntu 10.10:	0.00140,	delta =  17.51%
	Arch 03/2011:	0.00119,	delta =   0.00%
winners are:  Netbsd 5.1, Arch 03/2011

test_aepack
	Netbsd 5.1:	NA
	FreeBSD 8.2:	NA
	Ubuntu 10.10:	NA
	Arch 03/2011:	NA
no winners

test_aifc
	Netbsd 5.1:	1.49062,	delta =  28.61%
	FreeBSD 8.2:	1.17668,	delta =   1.52%
	Ubuntu 10.10:	1.19231,	delta =   2.87%
	Arch 03/2011:	1.15902,	delta =   0.00%
winners are:  FreeBSD 8.2, Ubuntu 10.10, Arch 03/2011

test_al
	Netbsd 5.1:	NA
	FreeBSD 8.2:	NA
	Ubuntu 10.10:	NA
	Arch 03/2011:	NA
no winners

test_anydbm
	Netbsd 5.1:	0.00736,	delta =   0.00%
	FreeBSD 8.2:	0.01428,	delta =  94.00%
	Ubuntu 10.10:	1.17291,	delta = 15838.95%
	Arch 03/2011:	0.03939,	delta = 435.24%
winner is:  Netbsd 5.1

...

Sat, Jan. 22nd, 2011, 06:17 pm
Release of FAPWS3-0.9

I'm really happy to announce a new release of Fapws3: 0.9.

This release contains several fixes and some interesting new features:
- avoids a crash in case you forget the "return" statement in your callback method
- a better Django adaptor (look in the samples)
- a session object which allows you to associate a Python object
(typically a dictionary) with a session ID
- a cookie parser (in base.py)
- better management of connections broken by the client
- a new multipart object allowing you to better manage uploads
- a new HTML form generator: SimpleForm
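The session idea is simple: an opaque ID (typically carried in a cookie) maps to a Python dictionary on the server. The class below only illustrates that concept; it is NOT Fapws3's actual Session API (look in the samples for the real one).

```python
import uuid

class SessionStore:
    """Concept sketch, not Fapws3's real API: map a session ID to a dict."""
    def __init__(self):
        self._sessions = {}

    def new(self):
        sid = uuid.uuid4().hex      # opaque ID to put in a cookie
        self._sessions[sid] = {}
        return sid

    def get(self, sid):
        return self._sessions.get(sid)

store = SessionStore()
sid = store.new()
store.get(sid)["user"] = "william"
print(store.get(sid))
```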

This release includes contributions from lots of people. I would specifically like to thank:
- Vincent for the contribution of the SimpleForm and Session objects
- Maxim for the Django adapter
- Shigin for the handling of broken/slow connections
- Satori for the coding layout/organisation
- ... and several other persons.

I'll also use this email to present the latest performance test results
I've made to compare Fapws3 and Cherokee+uwsgi.
Both are really close, but Fapws3 eats much less memory. Fapws3 is
amongst the best :-)

Such a small memory footprint was the key element for the alarm system I've
contributed to (a funny project).

Lots of work since the last release, but lots of fun and very good collaborations.
Thanks all for that.



For the next release I would like to improve the file upload algorithm.
Currently Fapws3 loads the whole file before handing control back to
the Python callback and the multipart object.
To optimise the memory footprint, it should directly use the selected multipart object.
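In pseudo-Python, the planned change looks roughly like this (the names are illustrative; `feed` stands for whatever chunk handler the multipart object will expose):

```python
import io

# Instead of buffering the whole body, hand each chunk straight to
# the multipart handler; memory use stays bounded by chunk_size.
def stream_body(read, feed, chunk_size=64 * 1024):
    total = 0
    while True:
        chunk = read(chunk_size)
        if not chunk:
            break
        feed(chunk)
        total += len(chunk)
    return total

# Demo with a fake 200 kB request body:
body = io.BytesIO(b"x" * 200000)
chunks = []
stream_body(body.read, chunks.append, chunk_size=4096)
```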


william

github: https://github.com/william-os4y/fapws3/
website: http://www.fapws.org/
PyPI: http://pypi.python.org/pypi/fapws3/0.9.dev

Mon, Nov. 22nd, 2010, 10:30 pm

I'm happy to announce the release of Fapws-0.8.1.
This release contains several fixes: datetime, setup.py, ...

On the other hand, I'm really happy to see more and more positive
feedback about Fapws ;-).

Just to point out one, I would mention a public website:
www.hannut-chapter.be, running Fapws-0.8 for 60 days as of today ;-)
Moreover, this site is directly connected to the Internet (no proxy).
In those 60 days: no issues, no memory leaks !!!!
It runs on a very cheap shared server with few HW resources.



Visit us on www.fapws.org
download: https://github.com/william-os4y/fapws3/

William

Tue, Jul. 27th, 2010, 10:27 am
Release 0.6 of Fast Asynchronous Python Wsgi Server

This new Fapws3 release fixes several bugs and brings the new "timers" feature.
http://github.com/william-os4y/fapws3
http://pypi.python.org/pypi/fapws3/


As you can see in the sample, timers allow you to execute a recurrent task with a predefined frequency.
I'm using it in the sample to show you the performance impact it can have on a commit.
This is just a sample and I know that it will not fit every case.

I'm currently working on a real "defer" where you can ask Fapws3 to execute a task asynchronously from the rest of the application.
The current tests I'm doing are really promising, but wait and see ;-). Maybe for the next release.

Fapws3's user community keeps growing and I frequently receive positive feedback, questions, ... .
Those questions mainly stem from the lack of documentation regarding Fapws3.
Thus, I would like to request your help on 2 main items.

People interested to provide some help have to contact me by email.


1. Documentation
------------------------------
To allow those people to step into Fapws3 more easily, I request your help to improve the documentation of Fapws3.
This can be generic docs, howtos, tips, ...

To be pragmatic, send me your texts (HTML format) in text files, and I'll add them to the website.
If the requests become too frequent, I will look into setting up a wiki (but that is for later).


2. Tests script
------------------------
I have the idea to implement a script that will perform some tests and send the results to our website (www.fapws.org). Everyone interested could then execute the tests and share the results with us, on our website.

The idea is to have a system like some Linux distributions have: collecting some specific information and sharing it by sending it to a webserver.
I plan to send the results via an HTTP POST request.

Basically, I'm thinking about a script that will use the "ApacheBenchmark" tool and will test some basic features of Fapws3: returning a list object, returning an iterable object and returning a file (you have them in the famous hello_world.py sample).
Thus the script will return data like "Req/sec, Concurrency Level, Time taken for tests, Complete requests, Failed requests, Write errors, Requests per second, Transfer rate".
Moreover, the script must collect some critical info about the context of the machine: CPU model, libev flags, kernel (uname -a), fapws3 version, libev version, python version.

This will give an idea of the performance on different systems.
This will show on which systems Fapws3 can run.
...


To provide some ideas, I'm thinking about something like:

LOG="bench_`date +%Y%m%d%H%M%S`.log"
nice -n 20 ab -n100000 -c10 http://127.0.0.1:8080/hello >> $LOG
nice -n 20 ab -n50000 -c10 http://127.0.0.1:8080/long >> $LOG
nice -n 20 ab -n50000 -c10 http://127.0.0.1:8080/iteration >> $LOG
#Parse $LOG
#Collect machine data
#send them to www.fapws.org


The server script will be something like:

# -*- coding: utf-8 -*-

import fapws._evwsgi as evwsgi
from fapws import base
from fapws.contrib import views
import os
import platform


def hello(environ, start_response):
    start_response('200 OK', [('Content-Type','text/html')])
    return ["hello world!!"]

staticlong=views.Staticfile("long.txt") #the long.txt file from the hello_world sample

def iteration(environ, start_response):
    start_response('200 OK', [('Content-Type','text/html')])
    yield "hello"
    yield " "
    yield "world!!"

def getenv(environ, start_response):
    start_response('200 OK', [('Content-Type','text/html')])
    env={}
    env['LIBEV_FLAGS']=os.environ.get('LIBEV_FLAGS','') #best would be to catch the default too
    env['libev']=evwsgi.libev_version()
    env['python']=platform.python_version()
    env['uname']=platform.uname()
    #other parameters still to implement
    return [str(env)]
   

def start():
    evwsgi.start("0.0.0.0", "8080")
    evwsgi.set_base_module(base)
   
    evwsgi.wsgi_cb(("/hello", hello))
    evwsgi.wsgi_cb(("/iteration", iteration))
    evwsgi.wsgi_cb(("/long", staticlong))
    evwsgi.wsgi_cb(("/getenv", getenv))

    evwsgi.set_debug(0)   
    evwsgi.run()
   

if __name__=="__main__":
    start()


For sure, on the server side, a Fapws3 script must be written to collect all the feedback, store it in a small DB (preferably sqlite) and also present it in nice tables and, maybe, some charts.
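A minimal sketch of that collecting side, assuming a hypothetical report format (the field names here are mine, not a fixed schema):

```python
import sqlite3

# Store each submitted benchmark report in a small SQLite table.
def init_db(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS reports (
                      uname TEXT, fapws3 TEXT, libev TEXT,
                      python TEXT, req_per_sec REAL)""")
    return db

def store_report(db, r):
    db.execute("INSERT INTO reports VALUES (?, ?, ?, ?, ?)",
               (r["uname"], r["fapws3"], r["libev"],
                r["python"], r["req_per_sec"]))
    db.commit()

db = init_db()
store_report(db, {"uname": "Linux bench 2.6.31", "fapws3": "0.6",
                  "libev": "3.6", "python": "2.6.5",
                  "req_per_sec": 3524.99})
print(db.execute("SELECT COUNT(*) FROM reports").fetchone()[0])
```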

W.

Tue, May. 25th, 2010, 06:12 pm
Release 0.5 of Fast Asynchronous Python Wsgi Server

I'm pleased to announce the release of Fapws3-0.5.
This release contains several bug fixes, mainly for the iterator objects.

Please note that, to better match the WSGI recommendations, the method "evwsgi.start" now requires 2 strings (address and port).
Please adapt your existing Fapws servers accordingly.

I would also like to report some user feedback:
- Fapws, and thus libev, has been compiled on AIX 5.3
- Fapws has been serving (without any frontend like pound, nginx, lighttpd, ...) a Django website for 40+ days without any interruption. This website has about 40 registered users logging in and out every day, and about 200 anonymous visitors per day.


Amongst others, thanks to Marc and Tamas for their contributions

For further details, I recommend you our website: http://www.fapws.org and our github page: http://github.com/william-os4y/fapws3

Download: http://github.com/william-os4y/fapws3/downloads



Have fun


William

Thu, Nov. 5th, 2009, 04:20 pm
Non blocking connections

While testing my Fapws3 webserver on different types of systems, I've discovered a strange behaviour of the Linux kernels.

Indeed, on Linux, despite my different tests, I've never had the "EAGAIN" error. On the opposite side, on OpenBSD 4.6 I receive a lot of those errors during the write process.

OpenBSD reports that this error pops up because the resource is not available. Within a non-blocking context this sounds logical. Indeed, the resource can still be busy with the previous write when we try to send the new one.

Now the valid question is: why don't we have such behaviour on the Linux kernel?

I must investigate deeper, but if someone has an explanation, I'm interested.
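For reference, this is the textbook way a non-blocking writer deals with EAGAIN (a generic Python sketch, not Fapws3's actual C code): retry the remaining bytes once the socket becomes writable again.

```python
import errno
import select
import socket

def send_all(sock, data):
    # When the kernel's socket buffer is full, send() raises EAGAIN /
    # EWOULDBLOCK; wait for writability and retry the remaining bytes.
    sent = 0
    while sent < len(data):
        try:
            sent += sock.send(data[sent:])
        except socket.error as e:
            if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
                select.select([], [sock], [])   # block until writable
            else:
                raise
    return sent

# Quick demonstration over a local socket pair:
a, b = socket.socketpair()
a.setblocking(False)
send_all(a, b"hello")
print(b.recv(16))
```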


William
http://github.com/william-os4y/


ps:
I've used OpenBSD-4.6
Linux-2.6.31 from Archlinux and Ubuntu

Fri, Jul. 17th, 2009, 09:56 am
Release of Fapws3-0.3 (Fast asynchronous python wsgi server)

I'm happy to announce the release of Fapws3-0.3.

This release does not bring new features, but fixes several bugs.

Let's have fun with that piece of code ;-)

You can get it from my GitHub repository: http://github.com/william-os4y/fapws3/
Or directly via the following link: http://github.com/william-os4y/fapws3/tarball/v0.3.1

For discussions, ideas, ... feel free to join the mailinglist: http://groups.google.com/group/fapws

I've tested it with Python 2.4, 2.5, 2.6 with libev-3.6 on linux and freebsd machines (with and without pound).

I'm using it for production websites, native or with Django. Contributions to have Fapws running with other WSGI frameworks are welcome.


William

Wed, Feb. 25th, 2009, 07:53 pm
FAPWS-0.2 (WSGI server based on libev)

I'm really happy to announce the release of FAPWS3-0.2, a WSGI webserver based on libev.

This release includes several bugfixes.

You can get it on my github website: http://github.com/william-os4y/fapws3/tarball/v0.2

Most importantly, with this release FAPWS becomes much more stable and useful.

I've tested it with many different types of configurations and it has always withstood my different stress tests (with the ApacheBenchmark tool):
- a Django webpage with a complex (and ugly) SQL command and 300 concurrent requests
- a simple Django webpage with 300 concurrent requests
- for a simple JPG file I've got 3524 #/sec.


Thanks for giving it a try.


William


ANNEXES:
========

Tests with 300 concurrent requests.


Heavy Django page:
------------------
Server Software:        fapws3/0.2
Server Hostname:        127.0.0.1 
Server Port:            8084      

Document Path:          /acts/2009/
Document Length:        24554 bytes     

Concurrency Level:      300
Time taken for tests:   166.334 seconds
Complete requests:      1000            
Failed requests:        0              
Write errors:           0              
Total transferred:      19772014 bytes 
HTML transferred:       19667754 bytes 
Requests per second:    4.81 [#/sec] (mean)
Time per request:       62375.084 [ms] (mean)
Time per request:       207.917 [ms] (mean, across all concurrent requests)
Transfer rate:          116.08 [Kbytes/sec] received                       


Much more simple Django page
-----------------------------
Server Software:        fapws3/0.2
Server Hostname:        127.0.0.1 
Server Port:            8084      

Document Path:          /membres/Off/
Document Length:        4918 bytes

Concurrency Level:      300
Time taken for tests:   23.178 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      5290304 bytes
HTML transferred:       5154064 bytes
Requests per second:    43.14 [#/sec] (mean)
Time per request:       6953.497 [ms] (mean)
Time per request:       23.178 [ms] (mean, across all concurrent requests)
Transfer rate:          222.89 [Kbytes/sec] received


Simple jpg file
----------------
Server Software:        fapws3/0.2
Server Hostname:        127.0.0.1 
Server Port:            8084      

Document Path:          /static/images/img04.jpg
Document Length:        13974 bytes

Concurrency Level:      300
Time taken for tests:   28.369 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      1416300000 bytes
HTML transferred:       1397400000 bytes
Requests per second:    3524.99 [#/sec] (mean)
Time per request:       85.107 [ms] (mean)
Time per request:       0.284 [ms] (mean, across all concurrent requests)
Transfer rate:          48754.28 [Kbytes/sec] received
