Bernie Hackett, Luke Lovett, Anna Herlihy, and I are pleased to announce PyMongo 3.0.3. This release fixes bugs reported since PyMongo 3.0.2—most importantly, a bug that broke Kerberos authentication. We also fixed a TypeError raised when turning off SSL hostname validation with an option in the MongoDB connection string, and an infinite loop when reading certain kinds of corrupt GridFS files.
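As a sketch of the connection-string option in question: the `ssl_match_hostname` URI option disables hostname validation. The host below is a placeholder, and the commented-out client construction assumes PyMongo is installed.

```python
# Sketch: disabling SSL hostname validation via a MongoDB connection
# string option (the option whose parsing raised a TypeError before
# PyMongo 3.0.3). "db.example.com" is a placeholder host.
uri = "mongodb://db.example.com/?ssl=true&ssl_match_hostname=false"

# With PyMongo installed, the client would then be created as usual:
#   from pymongo import MongoClient
#   client = MongoClient(uri)
```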
For the full list of bugs fixed in PyMongo 3.0.3, please see the release in Jira.
If you use PyMongo 3.0.x, upgrade.
If you are on PyMongo 2.8.x, you should probably wait to upgrade: we are about to make it easier for you. PyMongo 2.9, which will be released shortly, provides a smooth bridge for you to upgrade from the old API to the new one.
Let us know if you have any problems by opening a ticket in Jira, in the PYTHON project.
MongoDB 3.1.5 has been released. As a reminder, 3.1.5 is a development release and is not intended for production use. The 3.1 series will evolve into 3.2, which will be for production.
New/fixed in this release:
As always, please let us know of any issues.
– The MongoDB Team
I'm releasing libmongoc 1.1.9 with an urgent bugfix for a common crash in 1.1.8, a crash that was itself introduced while I was fixing a rarer one in 1.1.7. For further details:
In the process of validating my latest fix I expanded test coverage, and noticed that `./configure --enable-coverage` didn't work. That is now fixed in both libbson and libmongoc.
libbson 1.1.9 can be downloaded here:
libmongoc 1.1.9 can be downloaded here:
Introducing the new MongoDB Cloud Manager! Now when you create a new group in MongoDB Cloud Manager (formerly MMS), you immediately enter a 30-day free trial. All the great features of Cloud Manager are enabled during this period. At the conclusion of the trial, you will have the option to choose between the Standard Plan and the Free Tier Plan. A note about the differences between the plans:
Free Tier Plan
If you decide to pick the Standard Plan before the 30-day free trial is over, you will still get the remaining days of your trial for free.
Are there any changes to backup plans?
If I’m already running MMS Basic or MMS Classic, what happens to my group?
What are data bearing servers?
If I decide to stop using Automation, how do I unmanage my group from MMS?
Can I still choose the MMS Basic plan to get 8 free servers?
I released libbson and libmongoc 1.1.8 today. The significant change is the defeat of a stubborn crash reported weeks ago. Very rarely, when a `mongoc_client_t` is connected to a replica set while a member is added, and authentication fails, the driver leaves the client's data structures in an inconsistent state that makes it segfault later, in `mongoc_client_destroy`.
I had already gone one round with this bug and given up: I released 1.1.7 with extra checking and logging along this code path, but without a theory about the cause of the crash, much less a fix. The customer who reported the crash could reproduce it a couple times in each of their days-long durability tests, so they sent me core dumps. My colleague Spencer Jackson devoted heroic effort to understanding the core dumps (including one with no debug symbols!), and we finally discovered the sequence that leads to the crash.
The bug was in `_mongoc_cluster_reconnect_replica_set()`, which has two loops. The first loop tries nodes until it finds a replica set primary. In the second loop, it iterates over the primary's peer list, connecting and authenticating with each peer, including the primary itself.
The crash comes when we:

1. Begin the second loop over the primary's two-member peer list: `nodes_len` is set to 2 and the nodes list is reallocated, but the second node's struct is uninitialized.
2. Authentication fails partway through, so the loop exits early: `nodes_len` is 2 but the second node is still uninitialized!
3. `mongoc_client_destroy` iterates the nodes list, destroying them.
4. Since `nodes_len` is 2, the client tries to destroy the second, uninitialized node.
5. If the `stream` field in the second node happens to be non-NULL, the client calls `stream->close` on it and segfaults.
This was particularly hard for the customer's test to reproduce, because the driver has to connect while the test framework is reconfiguring auth in the replica set, and the reallocated buffer has to happen to contain non-zero garbage where the second node's `stream` pointer lands.
The fix is to properly manage `nodes_len`: don't increment it to N unless N nodes have actually been initialized. Additionally, zero out the new nodes right after reallocating the nodes list, so every field, including each node's `stream` pointer, starts NULL.
It's satisfying to nail this bug after a long chase, but also painful: that code path is long gone in the 1.2.0 branch, replaced by Samantha Ritter's implementation of the Server Discovery And Monitoring spec. If I could've released 1.2.0 by now we'd have saved all the trouble of debugging the old code. It only redoubles my drive to release a beta of the new driver this quarter and get out of this bind.