[Python-Dev] Pervasive socket failures on Windows

Tim Peters tim.peters at gmail.com
Sun Feb 12 04:35:35 CET 2006


[Tim]
>> The code in selectmodule when _MSC_VER is _not_ defined complains if a
>> socket fd is >= FD_SETSIZE _or_ is < 0.  But the new code in
>> socketmodule on non-Windows boxes is happy with negative fds, saying
>> "fine" whenever fd < FD_SETSIZE.  Is that right or wrong?

[Martin]
> I think it is right: the code just "knows" that negative values
> cannot happen. The socket handles originate from system calls
> (socket(2), accept(2)), and a negative value returned there is
> an error. However, the system might (and did) return handles
> larger than FD_SETSIZE (as the kernel often won't know what
> value FD_SETSIZE has).

Since the new code was just added, you can remember that now.  No
comments record the reasoning, though, and over time it's likely to
become another mass of micro-optimized "mystery code".  If it's true
that negative values can't happen (and I believe that), then it
doesn't hurt to verify that they're >= 0 either (except from a
micro-efficiency view), and it would simplify the code to do so.

>> "The answer" isn't so important to me as that this kind of crap always
>> happens when platform-specific logic ends up getting defined in
>> multiple modules.  Much better to define macros to hide this junk,
>> exactly once; pyport.h is the natural place for it.

> That must be done carefully, though. For example, how should
> the line
>
>                 max = 0;                     /* not used for Win32 */
>
> be treated? Should we introduce a
> #define Py_SELECT_NUMBER_OF_FDS_PARAMETER_IS_IRRELEVANT?

I wouldn't:  I'd simply throw away the current confusing avoidance of
computing "max" on Windows.  That's another case where
platform-specific micro-efficiency seems the only justification:
select() on Windows ignores its first argument, so there's nothing
special about "0" here, even though the current code makes 0 _look_
special on Windows.

So fine by me if the current:

#if defined(_MSC_VER)
		max = 0;		     /* not used for Win32 */
#else  /* !_MSC_VER */
		if (v < 0 || v >= FD_SETSIZE) {
			PyErr_SetString(PyExc_ValueError,
				    "filedescriptor out of range in select()");
			goto finally;
		}
		if (v > max)
			max = v;
#endif /* _MSC_VER */

block got replaced by, e.g.,:

		if (! Py_IS_SOCKET_FD_OK(v)) {
			PyErr_SetString(PyExc_ValueError,
				    "filedescriptor out of range in select()");
			goto finally;
		}
		if (v > max)
			max = v;

Unlike the current code, that would, for example, also allow for the
_possibility_ of checking that v != INVALID_SOCKET on Windows, by
fiddling the Windows expansion of Py_IS_SOCKET_FD_OK (and of course
all users of that macro would grow the same new smarts).

I'm not really a macro fan:  I'm a fan of centralizing portability
hacks in config header files, and hiding them under abstractions.  C
macros are usually strong enough to support this, and are all the
concession to micro-efficiency I'm eager ;-) to make.
