Hi all,
I think this discussion got a little bit out of hand.
I probably shouldn't have posted those numbers.
I would like to get this thread back on track.
But after that I'd like to briefly answer the questions from Gregor's
last mail and append the relevant sections of my code.
So here we go: "mutable or immutable". If I may try to summarize:
mutable:
========
* Stuart Axon: mainly consistency with rects
* René Dudfield: personally uses list more often than tuples
* Casey Duncan: consistency with rects and performance concerns
immutable:
==========
* Brian Fisher: immutability prevents subtle bugs
* Marcus von Appen: no reason given
* Gregor Lingl: should behave more like numbers than like lists
* Lorenz Quack: personally thinks the presented arguments for
immutability are stronger
So would anyone have strong objections if we go with immutable?
Vectors would then behave more like a mix of floats and tuples :)
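To make that concrete, here is a rough sketch (a hypothetical pure-Python stand-in, not the proposed C implementation) of what "a mix of floats and tuples" would mean for users of an immutable vector:

```python
# Hypothetical immutable 2d vector sketched as a tuple subclass:
# like tuples, instances are hashable and usable as dict keys;
# like floats, += rebinds the name instead of mutating the object.
class ImmutableVec2d(tuple):
    def __new__(cls, x, y):
        return super().__new__(cls, (x, y))

    def __add__(self, other):
        # Element-wise addition (overrides tuple concatenation),
        # always returning a fresh object.
        return ImmutableVec2d(self[0] + other[0], self[1] + other[1])

v = ImmutableVec2d(1, 2)
d = {v: "start"}               # hashable, like a tuple
old = v
v += ImmutableVec2d(3, 4)      # no __iadd__, so this rebinds v
assert v == (4, 6)
assert old == (1, 2)           # the original object is unchanged
```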
What follows is my response to Gregor:
Gregor Lingl wrote:
Lorenz Quack wrote:
>>> a = 2
>>> a += a
I believe the interpreter internally takes the two operands (in
this case a and a), adds them, and then rebinds the result to a
(the id changes), effectively doing
>>> a = a + a
precisely because a is immutable.
If a were mutable, the two expressions would indeed be different:
the += version would not create a new instance and rebind the name
a to it, but would modify the object a refers to, while a = a + a
would again create a new object and rebind the name.
Therefore I believe that this test does make sense.
Tell me if I'm wrong somewhere.
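For what it's worth, the rebinding can be observed directly with id() in plain CPython (no extension involved):

```python
# Immutable int: += rebinds the name to a new object, so the id changes.
a = 2
before = id(a)
a += a
assert a == 4
assert id(a) != before

# Mutable list: += calls list.__iadd__ and mutates in place,
# so the id stays the same.
lst = [1, 2]
before = id(lst)
lst += [1, 2]
assert lst == [1, 2, 1, 2]
assert id(lst) == before
```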
here are the calls with the results:
[snip]
which has more than 30000 digits. Which result did you get after  
10000000 executions of the statement x = x + x?
And which implementation of the long integer type did you use
that is that much faster than Python's?
Regards,
Gregor
Indeed, those are valid objections. First of all, I used a
self-written C extension with double as the underlying type, and the
result after 1023 iterations turns into (inf, inf). This could of
course invalidate the results, so I modified the test:
>>> timeit.repeat("x = Vector2d(2,3); x += x", "from vector import Vector2d", repeat=5, number=10000000)
[5.1832518577575684,
5.1106431484222412,
5.1510121822357178,
5.0923140048980713,
5.0608019828796387]
>>> timeit.repeat("x = Vector2d(2,3); x = x + x", "from vector import Vector2d", repeat=5, number=10000000)
[6.5348029136657715,
6.3499071598052979,
6.4433431625366211,
6.412431001663208,
6.4398849010467529]
>>> timeit.repeat("x = Vector2d(2,3)", "from vector import Vector2d", repeat=5, number=10000000)
[3.7264928817749023,
3.6346859931945801,
3.6241021156311035,
3.7733709812164307,
3.6264529228210449]
Did you use two different Vector2d classes here, one mutable and  
one immutable? Why do they
have the same name then? Or did you merely implement the operations  
x+=x and x=x+x differently?
The latter. Same class; one version uses the "nb_add" callback (or
from a Python perspective: "__add__") and the other the
"nb_inplace_add" callback (or again from Python: "__iadd__").
Whether x = x + y creates a new object x or changes x is also a matter
of how it is implemented.
Not really. When the "nb_add" C callback (or "__add__" for that
matter) is called, you have no way of knowing what the caller is
going to do with the result, so from inside that callback you cannot
distinguish between
>>> x = x + y
and
>>> z = x + y
You really have no choice but to return a new object.
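The same point can be sketched in pure Python (a hypothetical toy Vector2d, not the C extension): __add__ sees only its two operands and must return a fresh object, while __iadd__ may mutate and return self.

```python
# Toy illustration (hypothetical pure-Python Vector2d): __add__ cannot
# tell "x = x + y" apart from "z = x + y", so it must build a new
# object; only __iadd__ knows it may mutate in place.
class Vector2d:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # No information about the target name is available here,
        # so returning a fresh object is the only safe behavior.
        return Vector2d(self.x + other.x, self.y + other.y)

    def __iadd__(self, other):
        # In-place add: mutate self and return it; no allocation.
        self.x += other.x
        self.y += other.y
        return self

x = Vector2d(1, 2)
y = Vector2d(3, 4)
old_id = id(x)
x = x + y              # __add__: x now names a new object
assert id(x) != old_id
old_id = id(x)
x += y                 # __iadd__: same object, mutated in place
assert id(x) == old_id
assert (x.x, x.y) == (7, 10)
```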
Moreover, it is my conviction that one must not decide which data
type to use on the basis of a ±50 percent difference in performance.
ok.
One more remark: at least one module of the standard library of
Python has a (rather simple) 2d-vector class implemented in pure
Python, which of course has considerably worse performance, by a
factor of approximately 4:
>>> timeit.repeat("x = Vec2D(2,3); x = x + x", "from turtle import Vec2D", repeat=1, number=10000000)
[25.274672320512536]
Nevertheless, one would expect a class implemented in C to run
*much* faster than a pure Python solution. So I suspect that your
implementation may not be significant enough to serve as a criterion
for deciding that issue.
Best regards,
Gregor
So here comes the boiled-down version of my code:
#define PyVector2d_Check(v)  PyObject_TypeCheck(v, &PyVector2d_Type)
#define PyVector3d_Check(v)  PyObject_TypeCheck(v, &PyVector3d_Type)
#define PyVector4d_Check(v)  PyObject_TypeCheck(v, &PyVector4d_Type)
#define PyVectorNd_Check(v)  (PyVector4d_Check(v) || PyVector3d_Check(v) || PyVector2d_Check(v))
static PyObject *
PyVectorNd_add(PyObject *o1, PyObject *o2)
{
   int i;
   if (PyVectorNd_Check(o1)) {
       int dim = ((PyVectorNd*)o1)->dim;
       if (checkPyVectorNdCompatible(o2, dim)) {
           PyVectorNd *ret = (PyVectorNd*)PyVectorNd_NEW(dim);
           for (i = 0; i < dim; i++) {
               ret->data[i] = ((PyVectorNd*)o1)->data[i] + PySequence_GetItem_AsDouble(o2, i);
           }
           return (PyObject*)ret;
       }
   }
   else {
       int dim = ((PyVectorNd*)o2)->dim;
       if (checkPyVectorNdCompatible(o1, dim)) {
           PyVectorNd *ret = (PyVectorNd*)PyVectorNd_NEW(dim);
           for (i = 0; i < dim; i++) {
               ret->data[i] = PySequence_GetItem_AsDouble(o1, i) + ((PyVectorNd*)o2)->data[i];
           }
           return (PyObject*)ret;
       }
   }
   Py_INCREF(Py_NotImplemented);
   return Py_NotImplemented;
}
static PyObject *
PyVectorNd_inplace_add(PyVectorNd *self, PyObject *other)
{
   int i;
   if (checkPyVectorNdCompatible(other, self->dim)) {
       for (i = 0; i < self->dim; i++) {
           self->data[i] += PySequence_GetItem_AsDouble(other, i);
       }
       Py_INCREF(self);
       return (PyObject*)self;
   }
   Py_INCREF(Py_NotImplemented);
   return Py_NotImplemented;
}
static PyObject *
PyVectorNd_NEW(int dim)
{
   PyVectorNd *object;
   switch (dim) {
   case 2:
       object = PyObject_New(PyVectorNd, &PyVector2d_Type);
       break;
   case 3:
       object = PyObject_New(PyVectorNd, &PyVector3d_Type);
       break;
   case 4:
       object = PyObject_New(PyVectorNd, &PyVector4d_Type);
       break;
   default:
       fprintf(stderr, "Error: wrong internal call to PyVectorNd_NEW.\n");
       exit(1);
   }
   if (object != NULL) {
       object->dim = dim;
       object->epsilon = FLT_EPSILON;
       object->data = PyMem_New(double, dim);
       if (object->data == NULL) {
           Py_DECREF(object);  /* don't leak the half-built object */
           return PyErr_NoMemory();
       }
   }
   else {
       fprintf(stderr, "FAILURE: could not create new PyVectorNd object!\n");
   }
   return (PyObject *)object;
}
Note that in this case the difference between the two
(PyVectorNd_add and PyVectorNd_inplace_add) boils down to the extra
calls to PyVectorNd_Check and PyVectorNd_NEW.
And again: I'm not really here to discuss this particular code or  
look for optimizations.
regards,
//Lorenz