Sunday, June 30, 2013

Vasudev Ram: Python, meet Turtle

By Vasudev Ram






from time import sleep
import turtle as t

def waitforever():
    while True:
        sleep(10)

def main():
    x = 15
    while True:
        t.forward(x)
        t.right(90)
        x = x + 10
        if x > 500:
            break

    waitforever()

main()






Python Turtle graphics



- Vasudev Ram - Dancing Bison Enterprises



Contact me









via Planet Python http://jugad2.blogspot.com/2013/07/python-meet-turtle.html

Saturday, June 29, 2013

Dusty Phillips: Creating an Application in Kivy: Part 3

This article continues a series on creating a Kivy application. At the end of Part 2, we had a basic form for entering Jabber login details. In this part, we’ll hook up the login button to actually connect to the jabber server and download a Buddy List.


Make sure you’ve completed the previous two parts, or if you want to start from a clean slate, you can clone the end_part_two tag in my git repo for this project. Remember to do your own changes on a separate branch to avoid conflicts later.



git clone https://github.com/buchuki/orkiv.git # or 'git pull' if you've cloned before and want to update
git checkout end_part_two
git checkout -b working_changes
source venv/bin/activate

Table Of Contents


Here are links to other parts of this tutorial.



  1. Part 1: Introduction to Kivy

  2. Part 2: A basic KV Language interface

  3. Part 3: Interacting with widgets and the user


Handling Events


Dictionary.com defines an event as “something that happens, esp. something important”. That’s a perfect description of events in Kivy. Kivy is firing events all the time, but we only have to pay attention to those that we consider important (in the upcoming example, clicking/touching the Login button is important). Every graphical toolkit I have ever used has some concept of events. The difference between Kivy and those other toolkits is that in Kivy, event dispatch and handling is sane and uncomplicated.


If an event is something that happens, then an event handler is something that reacts when an event occurs. Our event handler will eventually log into the server and display the Buddy List, but let's start by printing a quote from a humorous scene in the film Rush Hour. We do this by adding a method to the AccountDetailsForm class in __main__.py:




class AccountDetailsForm(AnchorLayout):
    def login(self):
        print("Click the goddamn button")

Remember in Part 1 I mentioned that object oriented programming allowed us to add functionality to existing classes? That’s what we are doing here. The AnchorLayout class has no idea what it might mean to “login” to something, but it makes sense for AccountDetailsForm to do so. So we add a method to the subclass named login. A method is just a function that is aware of which object it is attached to (that’s why it has the parameter self). At this point, we don’t care what object we are attached to, but it will become important soon.


So that method, which we named login, is an event handler. Actually, technically, it’s just a method that happens to handle an event, and we will hook it up by calling that method from the true event handler. Let’s do that next. We just need to add one property to the login Button in our orkiv.kv file:




Button:
    size_hint_y: None
    height: "40dp"
    text: "Login"
    on_press: root.login()

Kivy is really really good about making programmers do the minimal amount of work required. If you add this one line of code and run it, you can click the Login button repeatedly, and each time, the movie quote will show up on the terminal.


The Button widget in Kivy has a press event. The key to signifying an event handler is to add an on_<eventname> property. Now this is where the KV Language gets a little creepy. The code that comes after the event handler property name is standard Python. In fact, you could just use on_press: print("you clicked it") and it would work. Or if you wanted multiple lines of Python, you could put them on the next line, indented, as though the program code was a child widget.


Don’t ever do that. That’s my rule: never have two or more lines of Python logic in a KV Language file. When you break this rule, make sure you know for certain that you have a good reason to do it and that you thought that reason all the way through. This will aid you in the future if you ever want to change the layout without trying to sift out the business logic that hasn’t changed. In my mind, the point of KV Language is to define layout and style. You should not mix logic (program code) with the styling. Instead, put the program code on the class in __main__.py and use one line of logic to call that method.


There’s a small amount of magic going on here. The root variable in a KV Language file always refers to the object that is at the left-most indent in our current block. In this case, it’s the <AccountDetailsForm> class, which is the class containing the handy login method. KV Language has two other “magic” variables that you may find useful: app and self. app can be used anywhere in a KV Language file to refer to the object that inherits from Application; so the Orkiv object. Self refers to the current widget. If we called self.x() in the on_press handler, it would refer to the Login Button object itself.


Interacting with widgets


Now, let’s figure out how to get some values out of the textboxes in the form. To this end, we’ll need to refer to objects in the KV Language file from inside the associated class in our __main__.py. This requires a few sets of changes to both files. Let’s start with some minor additions to the elements of our form in orkiv.kv.




Label:
    text: "Server"
TextInput:
    id: server_input
Label:
    text: "Username"
TextInput:
    id: username_input
Label:
    text: "Password"
TextInput:
    password: True
    id: password_input

We’ve simply added an identifier to each of our TextInput boxes so that we can refer to them elsewhere in the code. Note that I gave them all consistent names ending in _input. Consistency is very important in programming. Your memory is not infallible, and it would be silly to overtax it by trying to remember that one box is named password and the other is named UserNameTextInput. After years of programming, this statement seems blatantly obvious to me. However, I am frequently frustrated by non-developers who supply (for example) csv files that use spaces and underscores interchangeably or vary the capitalization of identifiers that I need to sanitize before using in program code. Do yourself and everyone else who works with your code a favor and strive for consistency. One really good way to do this in Python code is to strictly follow the Python Style Guidelines even if you disagree with some of the rules. I strongly recommend using a pep-8 highlighter in your editor (I personally use SublimeLinter) to aid you in this pursuit.


Ok, back to the code! The key takeaway is that by giving these widgets an id, we can reference them in other parts of the KV Language file. Specifically, we can set them as the values of custom properties on the AccountDetailsForm class rule like so:




<AccountDetailsForm>:
    anchor_y: "top"
    server_box: server_input
    username_box: username_input
    password_box: password_input
    BoxLayout:

We have defined three custom Kivy properties (named server_box, username_box, and password_box) on the AccountDetailsForm class. Each of these properties is assigned a value of the id of one of the text boxes.


It is common to name the properties the same as the ids on the boxes they point to (see my diatribe on consistency above!). However, I chose not to do that here for pedagogical reasons; it’s clear from this example that a property name is a completely different thing from an identifier.


This is no different from setting a property that is meaningful to the parent class, such as anchor_y. The difference is that the parent class (AnchorLayout) doesn’t know about them. Neither does the child class, yet, but we’ll now fix that in __main__.py:




from kivy.properties import ObjectProperty  # at top of file

class AccountDetailsForm(AnchorLayout):
    server_box = ObjectProperty()
    username_box = ObjectProperty()
    password_box = ObjectProperty()

We added an import statement so we can access Kivy’s ObjectProperty class. Then we specified three instances of this fancy property on the AccountDetailsForm. These instances must have the same names as the properties defined in the KV Language file, since they are referring to the same thing. When the class is constructed, these three properties are defined as having values of None. However, as part of the KV Language parsing process, Kivy applies the styling found in orkiv.kv and overwrites those None values with the three values we specified in the previous example.


So we have widgets that are named by ids in our KV Language file that are being assigned to properties by the same KV Language file that are defined in the __main__.py file. Got that? The upshot of this slightly convoluted pipeline is that we can now access those values inside the login method on our class:




def login(self):
    print(self.server_box.text)
    print(self.username_box.text)
    print(self.password_box.text)

If you run the code now, you should see the contents of the login form boxes displayed on the console when you click the Login button.


Choosing client libraries


Let’s take a break from programming and talk about something that has been overlooked in every programming textbook and tutorial I have ever read: How the hell do you figure out how to do what you need to do?


The short answer, as with every question asked in the last decade, is “Search Google!” (or rather, a search engine that doesn’t spy on you, such as Startpage.com or Duck Duck Go). That always brings out the more salient question, “what do I search for?”.


If you’ve recently started learning to program, you might be under the impression that programming is mostly about connecting loops, conditionals and data structures into some meaningful representation of an algorithm or design pattern. That’s certainly a major part of the job, but often, programming involves reusing existing code and connecting it. Software development is often more like building a lego set where you get to use a variety of prefabricated bricks that do almost exactly what you need, and then using contact cement to glue on some custom-built parts that don’t come with the lego set and can’t be found on store shelves.


Anyway, at this point we have a login form that we can enter details in and do something when the button is clicked. But we aren’t doing what we want to do, which is to display a buddy list for the given XMPP account. How can we do that?


Well, one option might be to read through the Jabber Protocol Specification and implement all the Jabber commands one by one. But that sounds like a lot of work. The absolutely most important attribute in a good programmer is laziness. This means spending hours to automate boring things so you never have to do them again. It also means spending extra time now to write your code in such a way that you or a coworker can reuse it in the future. Finally, it means taking advantage of other people’s work so you don’t have to redo it.


Most common tasks have been collected into what are called “libraries”. Kivy itself is a sort of library, a library for creating kick-ass graphical interfaces. Often when faced with a “how do I do this?” question, the next question should be “Are there any existing client libraries that I can use?” Imagine if you had to manually set the value of each pixel on the screen in order to render a button or textbox, and then process individual keypresses directly to find out what was in a text input box!


Before I started writing this tutorial, I wanted to make sure there was a client library I could use to connect to Jabber. I did a web search and found several options, best summarized by this question on Stack Overflow. Stack overflow is a great place to search for answers; if you can’t find what you are looking for, you can even ask the question yourself!


If a web search yields a nice collection of client libraries that might solve your problem, the next question is, “which one do I use?” The answer is to visit the home pages for each of the libraries and ask yourself questions like these:



  • Is the license compatible with the license I want to use for my application?

  • (If not open source) How much does it cost to license the library?

  • (If open source) Does the source code for the library look well maintained and easy to read?

  • Does the library appear to be actively developed and has there been a recent release?

  • Is there an active user community talking about the library?

  • Does the library appear to have useful (readable, up-to-date, complete) documentation?

  • What avenues of support do I have if I have trouble?

  • (If Python) Does it boast Python 3 support?


After some amount of research, I ended up choosing Sleek XMPP as it seems to have a modern, usable application programmer interface, reasonable source code and documentation, and a permissive license.


At this point, you’ll want to install SleekXMPP and its one dependency. Make sure the virtualenv is active and then issue the following command:



pip install sleekxmpp dnspython

Figuring out how to use a client library


The next step is making yourself familiar with the client library so you know what functions and methods to call to create the desired behavior. This depends entirely on the documentation for the library. I personally like to start coding as soon as possible and see what breaks. Then I refer to the documentation and figure out why it broke and evaluate how it needs to change. In the case of SleekXMPP I started exploring the documentation until I could answer the question, “What is the simplest possible command-line application I can use to retrieve a buddy list?”. This is it:




import sleekxmpp, sys

xmpp = sleekxmpp.ClientXMPP(sys.argv[1], sys.argv[2])

xmpp.connect()
xmpp.process()
xmpp.send_presence()
xmpp.get_roster()
print(xmpp.client_roster.keys())
xmpp.disconnect()

This script isn’t part of orkiv, it’s just some playing around I did before trying to integrate this collection of method calls into Orkiv. It took me about 20 minutes to come up with these few lines of code, mostly because the SleekXMPP documentation is a little rougher around the edges than I anticipated. I first implemented a couple of the tutorials in the Quickstart and got them working. However, the examples seem to use a verbose inheritance structure that seems quite unnecessary to me. So I tried to extract the guts into the procedural example you see above.


I do my testing inside a running Python shell — just type python and you can start entering python commands right into the interpreter. I personally use IPython as much as possible, because its shell is so much nicer to use than the standard one. At first, my call to get_roster() was returning an error; this was because I hadn’t noticed, in dissecting the inheritance structure, that I had to call connect() first! However, after my trial and error phase was complete, I came up with the above code, which I saved as sleekxmpp_buddylist.py and tested using python orkiv/sleekxmpp_buddylist.py jabberid@jabber.org password. I tested it with a throw-away Google account and my own personal jabber server. Works like a charm!


The script itself first imports a couple of modules and creates a ClientXMPP object, passing in the username (in the form “username@server.tld”) and password from the command line. Then it calls several methods on this object to do some jabbery stuff. Then I output the value of the roster (“buddy list”), and disconnect.


Printing a buddy list on login button click


We could just copy the code above into the login() method and modify it to use the jabber id and password we got from the form. However, that would be rather short-sighted. We know we’re going to need to keep this ClientXMPP object around longer than a single button click, and indeed, longer than the lifetime of the login form. So instead, let's add a method to the Orkiv application object itself:




from sleekxmpp import ClientXMPP  # at the top of the file

class Orkiv(App):
    def connect_to_jabber(self, jabber_id, password):
        self.xmpp = ClientXMPP(jabber_id, password)
        self.xmpp.connect()
        self.xmpp.process()
        self.xmpp.send_presence()
        self.xmpp.get_roster()

That’s a pretty simple method that creates an xmpp attribute on the app class and runs the boilerplate code for connecting to jabber, as we discovered in the script above. Of course, this method won’t actually do anything if we don’t call it from somewhere. The obvious place to do that is when we click the login button. The correct thing to do will be to connect to jabber and then render a buddy list in the window, but we’re running out of space, so let’s just print it to the console:




def login(self):
    jabber_id = self.username_box.text + "@" + self.server_box.text
    password = self.password_box.text

    app = Orkiv.get_running_app()
    app.connect_to_jabber(jabber_id, password)
    print(app.xmpp.client_roster.keys())
    app.xmpp.disconnect()

Once again, the method is quite simple. We first grab the jabber id and password from the form box, being careful to concatenate the username and server to generate the jabber id. The next line, where we get the running app may require some explanation. We are calling a function on the Orkiv class itself. Remember, a class describes an object, so we aren’t talking to an instance of that object here, just to the class. When a method is meant to be called on a class instead of an instance of that class, it is called a static method. In this case, the static method happens to return an instance of the class: the currently running app. Normally, there is only ever one running app in a Kivy program. In fact, you’d have to be both foolish and audacious to get more than one.


The other slightly tricky thing about this line is that we have not defined any get_running_app method on the Orkiv class. In fact, we only have one method, connect_to_jabber. But remember that Orkiv inherits functionality from the App, and indeed, App has a get_running_app static method. That’s what we’re calling here.


Once we have access to the active app object, it’s easy to print the roster and disconnect. Obviously, it wouldn’t make sense to disconnect immediately after login in a completed jabber client. However, if you don’t disconnect and test this, the program won’t stop running when you close the window; it leaves the xmpp object hooked up in the background trying to do the kind of stuff that jabber clients do. When I tried it, I had to force kill the process with kill -9.


It is common, when coding, to do temporary things like this to test the program in its current state. We know we’ll remove that line in part 4, but… well, truth be told, part 4 hasn’t been written yet! In part 4, we’ll deal with some annoyances with this form, like the fact that we can’t tab between input fields, that we have no idea what happens if we provide incorrect login details, and that pesky problem of the jabber staying alive after we closed the window.


Monetary feedback


If you liked the tutorial and would like to see similar work published in the future, please support me. I hope to one day reduce my working hours at my day job to have more time to devote to open source development and technical writing. If you think this article was valuable enough that you would have paid for it, please consider supporting me in one or more of the following ways:



I’d particularly like to advertise Gittip, not for my own financial gain, but for everyone. I think a world where people are able to use gittip for their primary source of income is a world worth striving for. Even if you don’t choose to tip me, consider tipping someone, or if you are a producer of knowledge or art, market your own products and services through the generosity system.


Finally, if you aren’t in the mood or financial position to help fund this work, at least share it on your favorite social platforms!






via Planet Python http://archlinux.me/dusty/2013/06/29/creating-an-application-in-kivy-part-3/

Friday, June 28, 2013

Vasudev Ram: Windows msvcrt console I/O in Python

By Vasudev Ram



Python has some OS-specific modules (libraries), apart from its OS-independent ones.



There are OS-specific modules for Unix/Linux, Windows and Mac OS X - see the Python library documentation.



The msvcrt module of Python provides some Windows-specific routines.



This post shows one of the useful features of the msvcrt Python module.




# console_io.py

# Using the Python msvcrt module's getch function
# to display key codes and characters.

# Author: Vasudev Ram - http://www.dancingbison.com
# Version: v0.1

from msvcrt import getch, getche

def main():
    print "This program prints keyboard characters"
    print "and their codes as keys are pressed."
    print "Press q (lower case) to quit."
    while True:
        print "Press a key: "
        c = getch()
        # Or use getche() to echo the characters as they are typed.
        print " c =", c
        print " ord(c) =", ord(c)
        if c == 'q':
            break

main()





Try running the above program with the command:



python console_io.py



after saving the program as console_io.py on Windows.



While the program is running, try pressing various keys on the keyboard, including the letter, digit, punctuation and other QWERTY keyboard keys, as well as the function keys (F1 to F10 or F12), other special keys like arrow keys and Ins / Del / Home / End / PgUp / PgDn, and see what happens.



If you don't already know why some of the behavior appears odd, try to figure it out by searching on the Web. Hint: Use terms like ASCII code, IBM PC key codes, scan codes as search keywords.
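
For readers who want to poke at this from code rather than just search, here is a minimal sketch (same Python 2, Windows-only assumptions as the program above; it is not part of the original listing). It shows what those special keys actually send: getch() returns them as a two-character sequence whose first character is '\x00' or '\xe0', followed by a scan code.


# extended_keys.py - exploration sketch, not part of console_io.py

from msvcrt import getch

while True:
    c = getch()
    if c in ('\x00', '\xe0'):
        # Special keys (arrows, function keys, Ins/Del/Home/End/PgUp/PgDn)
        # arrive as a prefix byte followed by a scan code.
        c2 = getch()
        print "Special key, prefix", ord(c), "scan code", ord(c2)
    else:
        print "Key", repr(c), "code", ord(c)
        if c == 'q':
            break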



- Vasudev Ram - Dancing Bison Enterprises



Contact me











via Planet Python http://jugad2.blogspot.com/2013/06/windows-msvcrt-console-io-in-python.html

Brian Okken: If unittest is wrong, nose is wrong also

Regarding unittest being wrong, if unittest is wrong, nose is also, since it follows the same convention.


The post If unittest is wrong, nose is wrong also appeared first on Python Testing.






via Planet Python http://feedproxy.google.com/~r/PythonTesting/~3/-TbK1vHxfpw/

Ian Ozsvald: Visualising True Positives and False Positives against Features with scikit-learn

Here I’m starting to look into the errors caused in the social media brand disambiguator project. Below I look at true and false positives (correct and mistaken is-a-brand classifications) and plot them against the number of features that two different classifiers can use to calculate their class membership probabilities.


First I’m using the default LogisticRegression classifier. For both of these examples I’m using (1,3) n-grams (uni-, bi- and tri-grams) and a minimum document frequency of 2 occurrences for a term when building the Binary Vectorizer. The Vectorizer is constructed inside a 5-fold cross validation loop, so the number of features found varies a little per fold (you can see this in the two image titles – the title is generated using the final CV Vectorizer).
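
The actual feature-building code lives in learn1_experiments.py; as a rough sketch of the setup described above (names like texts and labels are my placeholders, and the era-appropriate sklearn.cross_validation module is assumed), it looks something like this:


# Sketch: binary (1,3)-gram vectorizer rebuilt inside each CV fold.
# Assumes `texts` is a list of tweet strings and `labels` a numpy array of 0/1.
from sklearn.cross_validation import StratifiedKFold
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

for train_idx, test_idx in StratifiedKFold(labels, 5):
    vectorizer = CountVectorizer(ngram_range=(1, 3), min_df=2, binary=True)
    X_train = vectorizer.fit_transform([texts[i] for i in train_idx])
    X_test = vectorizer.transform([texts[i] for i in test_idx])
    clf = LogisticRegression().fit(X_train, labels[train_idx])
    probs = clf.predict_proba(X_test)[:, 1]  # P(is-a-brand) per test row
    nbr_features = X_test.sum(axis=1)        # features present per row (binary matrix)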


[Figure: LogisticRegression class probabilities vs. number of features (scikit_testtrain_apple_logreg_class_probs_vs_nbr_features)]


Class 1 (is-a-brand) results are ‘light blue’, they cluster towards the top of the graph (towards probability of 1 of being-in-class-1). Class 0 (is-not-a-brand) results cluster towards the bottom (towards a probability of 0 of being-in-class-1). There’s a lot of mixing around P(0.5) as the two classes aren’t separated terribly well.


We can see that the majority of the points (each circle ignoring which class it is in) have 1 to 10 features by looking along the x-axis, a few go up to over 50 features. Since the features include bi- and tri-grams we’ll see a lot of redundant features for these examples.


If we imagine drawing a threshold for is-class-1 above 0.89 then between all the cross validation test results (584 items across the 5 folds) I’d have 349 true positives (giving 100% precision, 59% recall). If I set the threshold to 0.78 then I’d have 422 true positives and 4 false positives (the 4 black dots above 0.78 giving 99% precision and 72% recall).
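
The threshold arithmetic is easy to reproduce; a small sketch, assuming probs holds P(class 1) for the test rows and y_true their 0/1 labels (my variable names, not the project's):


import numpy as np

def precision_recall_at(probs, y_true, threshold):
    predicted = probs >= threshold           # is-class-1 above the threshold
    tp = np.sum(predicted & (y_true == 1))   # true positives
    fp = np.sum(predicted & (y_true == 0))   # false positives
    precision = tp / float(tp + fp) if (tp + fp) else 0.0
    recall = tp / float(np.sum(y_true == 1))
    return tp, fp, precision, recall

# e.g. precision_recall_at(probs, y_true, 0.78) for the second threshold above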


Now I repeat the experiment with the same Vectorizer settings but changing the classifier to Bernoulli Naive Bayes. The diagram shows a much stronger separation between the two classes:


[Figure: Bernoulli Naive Bayes class probabilities vs. number of features (scikit_testtrain_apple_bernoullinb_class_probs_vs_nbr_features)]


If I choose a threshold of 0.66 then I have 100% precision with 66% recall. If I choose 0.28 then I get 2 false positives giving 99.5% precision with 73% recall. It is nice to be able to visualise the class separations for each of the test rows, to both have a feel for how the classifier is doing and to view how changing the feature set (without modifying the classifier) changes the results.


Looking at these results I’d obviously want to diagnose what the false positive results look like, maybe that gives further ideas for features that could help to separate the two classes. The modifications to learn1_experiments.py are in this check-in on the github project.




Ian applies Data Science as an AI/Data Scientist for companies in Mor Consulting, founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.



via Planet Python http://ianozsvald.com/2013/06/28/visualising-true-positives-and-false-positives-against-features-with-scikit-learn/

Brian Okken: Perhaps unittest is wrong, and tearDown should always run

I’ve been looking at unittest fixtures, and seeing how they are treated in unittest, nose, and pytest. I started out with the assumption that unittest was correct. Now. I don’t think it is. From the unittest documentation For tearDown(): “This method will only be called if the setUp() succeeds, regardless of the outcome of the [...]


The post Perhaps unittest is wrong, and tearDown should always run appeared first on Python Testing.






via Planet Python http://feedproxy.google.com/~r/PythonTesting/~3/uk9nTBhImBc/

Dave Haynes: Ten steps towards native devops for your Python 3 application

The word devops was coined to unify the roles of tech-ops and development. Since then we've seen the appearance of tools which claim to automate devops tasks.


I have encountered unnecessary complexity in these 'solutions'. They're not always a good match for the Python ecosystem. They add new failure modes which aren't caught in testing because as tooling they fall outside our development cycle.


So I'd like to suggest that Python developers start designing for devops. We need to own the tasks of installation and configuration. We need to test the way they work in the same way we test the rest of our code.


With the features now available in Python 3, there has never been a better time to review how we do this. So here are my ten suggestions for getting there.



1. Make deployment a use case


Any code which is regularly modified and deployed needs to be designed with that in mind.


Upgrading software is more than just delivering packages. If there is a change to the schema or the business logic, data migrations become necessary. And migrations are risky operations; they require testing. So for those reasons:



  • Devops code goes in version control

  • Devops code specifies its dependencies

  • Devops code is packaged

  • Devops code has a release number


Sounds familiar? It's time to apply some rigour to the software we use to deploy and maintain our systems.


I'm of the opinion now that any deployed Python project should be a namespace package. In Python 3.3, namespaces are easier than ever to define. You should make the devops functionality of your project a subpackage of your namespace and treat it as part of your product.


If, where you work, devops is performed by sysadmins, then you will need to begin educating them on what standards of quality you expect. There are ways to accommodate contributors to a namespace project, as I described recently.




2. Write an ops manual in Sphinx


Whilst we aspire to automation, it's inevitable in a changing environment that ad-hoc tasks are necessary.


It's a simple matter to jot down these commands in a text file. It's even better to maintain a proper operations manual under version control which can be compiled by Sphinx.


In tricky situations it's reassuring to have clear instructions to follow. Sphinx can help you organise your devops notes into an orderly set of processes, complete with syntax highlighting and hyperlinks to code modules.


So if your company dumps all its tech-ops snippets on a wiki (what happens when that VPN goes down?) you might like to float the idea of an offline manual, defined as reStructuredText in your devops package, and maintained in version control.




3. One file defines the deploy


No matter how you organise your computing assets, ultimately your configuration of them is expressed as a bunch of attributes which apply to nodes or groups of nodes. These attributes parameterise the scripts you run to manage them.


It helps very much if those attributes are all in one place, and can be verified easily by the human eye.


The config file should contain data not code. That means no logic at all, only perhaps variable substitution. You should be able to evaluate and view the parameters of any node from the command line.


I find .ini style files easy to read, and they are well supported by the Python standard library. For me, this is an advantage over alternatives like JSON (less easy on the eye) and YAML (needs an external library).
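
As a minimal sketch of what I mean (nodes.ini and the node01 section are hypothetical names), the standard library's configparser reads such a file with nothing more than optional %(name)s substitution:


import configparser

config = configparser.ConfigParser()   # data only; %(name)s substitution at most
config.read("nodes.ini")               # hypothetical file name

ncd = dict(config["node01"])           # flat attribute dictionary for one node
print(ncd["sshd.ip"], ncd["sshd.port"])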




4. No big bangs


Don't be tempted by tools which promise you hands-free operation. There's no magic about them; they are made of software too, and when they break you will wish you understood how they worked.


Whenever the configuration process is inaccessible to you, it is out of your control. And you are the guy who's supposed to have control.


Beneath the veneer of one-click deployment should be an ordered sequence of steps which you understand very well. You should be able to halt the process of configuration of a node at any point you wish and continue it by hand.




5. Change on command


Most devops tools either favour ad-hoc modifications to systems (where you make isolated changes in support of correct operation) or a convergent model (hands-free mutation until a reference state is reached).


I'd like to suggest a third approach, which I'll call Change on Command. A ConC script is code which has access to the business logic of the application it delivers. It defines and performs a sequence of operations which will result in transition to a new working configuration of that application.


ConC is different from ad-hoc because it is part of the application codebase and it is tested as part of the release.


To show how simple this can be, I'll sketch out a basic ConC module for your project's devops subpackage.


We will use Holger Krekel's library execnet to do the remote invocation. This elegant little package has been around for a while, but fully supports Python 3. Its purpose is to run code in a Python interpreter on a remote machine and send back results. It is all we need to create our own devops framework.


We'll begin by defining some simple classes for control and reporting.




class Host(object):

    local = "local"
    remote = "remote"

class Status(object):

    ok = "OK"
    blocked = "BLOCKED"
    failed = "FAILED"
    stopped = "STOPPED"
    error = "ERROR"
    timedout = "TIMED OUT"

Job = collections.namedtuple("Job", ["host", "op"])

... and make a list of jobs we need to do. In this case, it's delivering and installing some Python packages. Each job's op attribute is a function or a function object. We'll go into some of the more interesting ones later.




jobs = [
    Job(Host.remote, open_bundle),
    Job(Host.remote, create_venv),
    Job(Host.local, open_bundle),
    Job(Host.local, mount_SSHFS),
    Job(Host.local, copy_product),
    Job(Host.remote, UnTar("setuptools-*.tar.gz")),
    Job(Host.remote, SetupInstall("setuptools-*")),
    Job(Host.remote, UnTar("pip-*.tar.gz")),
    Job(Host.remote, SetupInstall("pip-*")),
    Job(Host.remote, PipInstall("SQLAlchemy-0.8.1.tar.gz")),
    Job(Host.local, unmount_SSHFS),
    Job(Host.remote, close_bundle),
    Job(Host.local, close_bundle),
]

Execnet lets us invoke a Python script twice at the same time; once on our workstation and once on the remote node. The two running modules know which is which and can send data to each other in the form of Python primitives.


The script on our workstation runs under the name __main__. We'll set up some logging and grab a reference to this module we're running.




if __name__ == "__main__":
logging.basicConfig(
format="%(asctime)s %(levelname)-7s %(host)-10s %(name)-10s %(message)s")

module = sys.modules[__name__]

Throughout this example, let's assume we have read the configuration file and that the data for a particular node is in the dictionary ncd.




    ...
    user = ncd["admin"]
    host = ipaddress.IPv4Address(ncd["sshd.ip"])
    port = ncd["sshd.port"]
    keyPath = os.path.expanduser(
        os.path.join("~", ".ssh", "id_rsa-{}".format(ncd["op_key"])))

It's not a good idea to store passwords in plain text, so we'll prompt for those interactively.




    ...
    sudoPass = getpass.getpass("Enter sudo password for {}:".format(user))

Here's the execnet bit. We create an execution group and launch the same module via ssh on to the remote node's Python 3 interpreter.




    ...
    execGroup = execnet.Group()
    gw = execGroup.makegateway(
        "ssh=-i {} -p {} {}@{}//python=python3".format(
            keyPath, port, user, host))

    channel = gw.remote_exec(module)

Then we'll send that node's configuration data and the password required for the superuser. After this setup, we'll call a loop which runs our jobs in order.




    ...
    channel.send(ncd)
    channel.send(sudoPass)

    rv = work_loop(channel, ncd, sudoPass)
    sys.exit(rv)

The work loop visits each job in sequence. If it's a local task, it gets invoked on our workstation. If not, we send the index of the job over the channel to the remotely operating module. Then we wait for a response.




def work_loop(chan, ncd, sudoPass):
    rv = 0
    for n, job in enumerate(jobs):
        lgr = logging.getLogger(job.op.__class__.__name__)

        host = ncd["host"] = (ncd["name"] if job.host == Host.remote
                              else "localhost")

        if job.host == Host.remote:
            chan.send(n)
            m, status = chan.receive()
        else:
            m, status = n, job.op(ncd, sudoPass)

        lgr.info(status, extra={"host": host})
        if status not in (Status.ok, Status.stopped):
            rv = 1
            break
    return rv

Remember that the same module is running on the node, and that the list of jobs is defined there too. When it starts up remotely, it should receive the configuration data and the sudo password. Execnet runs the remote module under the name __channelexec__.




if __name__ == "__channelexec__":
ncd = channel.receive()
sudoPass = channel.receive()

Then we enter a loop, awaiting the instruction to run a job. When we've invoked the defined operation, we return the result back across the channel.




    ...
    while True:
        n = channel.receive()
        job = jobs[n]
        status = job.op(ncd, sudoPass)
        channel.send((n, status))

That's all we need to define an ordered sequence of operations which can be coordinated between a local and a remote node.




6. Lock operations with the bundle


When we're deploying software, we have to transfer our files to the remote node. But it's usually not just one file only. If our project has dependencies, we'll need to supply them too; they are our vendor packages. I call this collection of packages the bundle.


So the first job of a deployment is to create a directory on the node to hold those files. And actually, this is a useful thing to do even if there are no files to transfer at all. The bundle directory acts as a lock, telling us that the node is in the process of reconfiguration.


Here's the first function referenced in the job list. It simply creates a directory in the home path of the user. Remember, this is a job which runs on both the local and the remote node.




def open_bundle(ncd, sudoPass):
    try:
        locn = os.path.expanduser(os.path.join("~", ncd["bundle"]))
        os.mkdir(locn)
    except OSError:
        return Status.blocked
    else:
        return Status.ok

And here's the function we use to remove the bundle. It's always the last thing we do:




def close_bundle(ncd, sudoPass):
    try:
        locn = os.path.expanduser(os.path.join("~", ncd["bundle"]))
        shutil.rmtree(locn, ignore_errors=False)
    except OSError:
        return Status.failed
    else:
        return Status.ok



7. SSHFS simplifies delivery


With the bundle in place, we can start to move our files across. I always used to do this with scp, but I've recently discovered another solution which is much neater: sshfs.


SSHFS works over SFTP. You don't have to install anything extra on the node, but you'll want to put the sshfs package on your workstation. Then you can mount and unmount the remote bundle with a single command.


The advantage of this is that if you have to pause the deploy for any reason, you can then manually modify the bundle while it is mounted. It's your bridge to the node while the deploy is under way.


Here's the function I use to mount the bundle locally:




def mount_SSHFS(ncd, sudoPass):
    keyPath = os.path.expanduser(
        os.path.join("~", ".ssh", "id_rsa-{}.pub".format(ncd["op_key"])))
    sshCommand = "ssh_command='ssh -i {keyPath}' ".format(keyPath=keyPath)

    node = ipaddress.IPv4Address(ncd["sshd.ip"])
    port = ncd["sshd.port"]
    tgt = os.path.expanduser(os.path.join("~", ncd["bundle"]))
    cmd = ("sshfs -o {ssh_command}"
           "-p {port} {admin}@{node}:{bundle} {tgt}").format(
               admin=ncd["admin"], bundle=ncd["bundle"], node=node, port=port,
               ssh_command=sshCommand, tgt=tgt)

    p = subprocess.Popen(cmd,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT,
                         shell=True)
    out, err = p.communicate()
    if out:
        return Status.failed
    else:
        return Status.ok

The unmounting operation is a subprocess call of fusermount -u ~/bundle.
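
A sketch of what unmount_SSHFS might look like, mirroring mount_SSHFS above (the real function is not shown in this article):


def unmount_SSHFS(ncd, sudoPass):
    # Sketch only: unmount the bundle with fusermount, as described above.
    tgt = os.path.expanduser(os.path.join("~", ncd["bundle"]))
    p = subprocess.Popen("fusermount -u {}".format(tgt),
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT,
                         shell=True)
    out, err = p.communicate()
    return Status.failed if out else Status.ok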




8. Use the venv module


Apart from the obvious syntax changes, the most exciting developments in Python 3 for me are those which reinforce its role as a web language. I sneaked in some use of the new ipaddress module earlier on. Pluggable event loops is a very welcome proposal that I'm looking forward to seeing in Python 3.4.


The standard library now has a module for creating isolated Python environments (virtualenvs). This really means they are now the officially sanctioned way of deploying your app.


Here's a function which creates the virtual environment on a remote node.




def create_venv(ncd, sudoPass):
    then = time.time()
    time.sleep(0.2)
    locn = os.path.expanduser(os.path.join("~", ncd["venv"]))
    bldr = venv.EnvBuilder(
        system_site_packages=False,
        clear=True)
    bldr.create(locn)
    if os.path.getmtime(locn) > then:
        return Status.ok
    else:
        return Status.failed



9. Keep build tools out of production


From time to time I hear I should be using debian packages to deploy my code. In case you think that's true, here's the perfect example of why it's not.


If you install the Ubuntu packages python-pip or python-setuptools they will pull in python-dev and after several minutes you will discover you have the entire gcc toolchain on your production server.


This is not what we want.


Rather, you should include source packages of setuptools and pip in your bundle. Install them into your virtualenv with setup.py install.
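
The UnTar, SetupInstall and PipInstall jobs from the list in section 5 are not shown in this article; as an illustration only, a SetupInstall callable might be sketched like this (it assumes glob, os and subprocess are imported at the top of the module, and that the source package has already been unpacked into the bundle):


class SetupInstall(object):
    """Sketch: run 'setup.py install' for an unpacked source package,
    using the interpreter of the freshly created virtualenv."""

    def __init__(self, pattern):
        self.pattern = pattern

    def __call__(self, ncd, sudoPass):
        bundle = os.path.expanduser(os.path.join("~", ncd["bundle"]))
        venvPy = os.path.expanduser(
            os.path.join("~", ncd["venv"], "bin", "python"))
        pkgDir = glob.glob(os.path.join(bundle, self.pattern))[0]
        p = subprocess.Popen([venvPy, "setup.py", "install"],
                             cwd=pkgDir,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
        out, err = p.communicate()
        return Status.ok if p.returncode == 0 else Status.failed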


So long as your application is pure Python it can be installed from a source package. You should prefer this way over eggs, since their days are numbered in the Python 3.4 timeframe. In 2014 we should begin to see adoption of the new wheel format instead.




10. Write once, test everywhere


I mentioned testing earlier on. Testing is the greater part of Engineering. Here are some of the levels of testing we need to be aware of:



  • Unit tests

  • Integration tests

  • Migration tests

  • Functional tests

  • Load tests

  • Monitoring


The unittest module is useful in many of these scenarios, although it can cause confusion when people claim that their 'unit tests' take several minutes each to run. I try to acknowledge the purpose I'm using unittest for when I write a module, so if it's really a functional test I'll write:




import unittest as functest

Remember execnet? Well, if it executes a module on a node it can run a test on a node.




import unittest as nodetest

I've started to do this a lot recently and it's very liberating. You can run checks against a newly paved node, or on a regular basis as part of a credentialed scanning regime.


Here's a quick example, which is useful because it's the first to make use of the sudo password. It happens to check that an iptables module has been loaded with the correct options.




class FirewallChecks(nodetest.TestCase):

    ...

    def test_xt_recent_module_loaded(self):
        p = subprocess.Popen(
            ["sudo", "-S", "cat",
             "/sys/module/xt_recent/parameters/ip_pkt_list_tot"],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            stderr=subprocess.DEVNULL)
        out, err = p.communicate("{}\n".format(self.sudoPass).encode("utf-8"))
        val = out.decode("utf-8").strip()
        self.assertEqual("35", val)

It's a moderately simple task to write a test runner which will send the results back down the execnet channel to a controlling host.
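
As a sketch only (the details are mine, not part of this article), such a runner on the remote side could load the node's TestCases, run them, and push a summary back over the channel:


def run_node_tests(channel, sudoPass):
    """Sketch: run FirewallChecks remotely and report back over execnet."""
    suite = nodetest.TestLoader().loadTestsFromTestCase(FirewallChecks)
    for test in suite:
        test.sudoPass = sudoPass             # the tests read this attribute
    result = nodetest.TestResult()
    suite.run(result)
    channel.send((result.testsRun,
                  len(result.failures),
                  len(result.errors)))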




Notes


The patch required to apply ssh options for port numbers and key files is not yet available in execnet trunk. But you can apply it at run-time with this code:




def patch_execnet():
    """ Patch execnet to accept extended ssh arguments """
    import execnet
    import execnet.gateway_io

    def ssh_args(spec):
        remotepython = spec.python or "python"
        args = ["ssh", "-C"]
        args.extend(spec.ssh.split())
        remotecmd = '{} -c "{}"'.format(
            remotepython, execnet.gateway_io.popen_bootstrapline)
        args.append(remotecmd)
        return args

    execnet.gateway_io.ssh_args = ssh_args
    return execnet

Thanks to Holger Krekel and everyone who develops execnet.




Commercial


I am currently available for consulting work. You can hire me through Thuswise Ltd.







via Planet Python http://hwit.org/ten-steps-towards-native-devops-for-your-python-3-application.html

MiniTube Adds Account-Free Subscription, Heads to Ubuntu Software Centre

[Image: MiniTube on the Ubuntu Desktop]



MiniTube has been updated with a rather nifty new feature: account-free channel subscriptions.


Version 2.1 of the Adobe Flash-free desktop video player lets you directly subscribe to YouTube channels without needing to login with a YouTube account.


Even better for Ubuntu users, when a new video is available on a subscribed channel a notification bubble will appear.


This latest update also features the following changes:



  • VEVO video playback fixed

  • Faster startup

  • Improved playlist

  • Skipping to the next video now works on Linux


Heading to Ubuntu Software Center


MiniTube 2.1 will be available to install from the Ubuntu Software Centre in the coming days.


App developer Flavio Tordini hopes that distributing updates directly via the Software Center will allow an ‘easier to install and more up-to-date Minitube’ for Ubuntu users.


It’s important that I point out that the updated version will need to be installed manually. If you have an existing version of MiniTube installed, from either a PPA, a .deb or the Ubuntu repositories, you should uninstall it prior to upgrading.


MiniTube 2.1 is not currently live on the Software Centre, but we’ll let you know as soon as it is. In the meantime you can browse the source code on GitHub.


MiniTube Source


The post MiniTube Adds Account-Free Subscription, Heads to Ubuntu Software Centre appeared first on OMG! Ubuntu!.







via http://feedproxy.google.com/~r/d0od/~3/kM4ekHXaetM/minitube-adds-account-free-subscription-heads-to-ubuntu-software-centre

Thursday, June 27, 2013

Brian Okken: Reworking categories

I just went through and reworked some of my categories. Trying to have pytest stuff under pytest category, etc. Mostly, this meta tinkering shouldn’t affect you at all. However, I’m not a WP expert. So, if there’s a link broken, or something just seems wacky with the site, please let me know.


The post Reworking categories appeared first on Python Testing.






via Planet Python http://feedproxy.google.com/~r/PythonTesting/~3/RzBQCN_F0sc/

Mike Driscoll: wxPython: How to Communicate with Your GUI via sockets

I sometimes run into situations where it would be nice to have one of my Python scripts communicate with another of my Python scripts. For example, I might want to send a message from a command-line script that runs in the background to my wxPython GUI that’s running on the same machine. I had heard of a solution involving Python’s socket module a couple of years ago, but didn’t investigate it until today when one of my friends was asking me how this was done. It turns out Cody Precord has a recipe in his wxPython Cookbook that covers this topic fairly well. I’ve taken his example and done my own thing with it for this article.


wxPython, threads and sockets, oh my!


Yes, we’re going to dive into threads in this article. They can be pretty confusing, but in this case it’s really very simple. As is the case with every GUI library, we need to be aware of how we communicate with wxPython from a thread. Why? Because if you use an unsafe wxPython method, the result is undefined. Sometimes it’ll work, sometimes it won’t. You’ll have weird issues that are hard to track down, so we need to be sure we’re communicating with wxPython in a thread-safe manner. To do so, we can use one of the following three methods:



  • wx.CallAfter (my favorite)

  • wx.CallLater (a derivative of the above)

  • wx.PostEvent (something I almost never use)


Now you have the knowledge of how to talk to wxPython from a thread. Let’s actually write some code! We’ll start with the thread code itself:



# wx_ipc.py

import select
import socket
import wx

from threading import Thread
from wx.lib.pubsub import Publisher

########################################################################
class IPCThread(Thread):
    """"""

    #----------------------------------------------------------------------
    def __init__(self):
        """Initialize"""
        Thread.__init__(self)

        # Setup TCP socket
        self.socket = socket.socket(socket.AF_INET,
                                    socket.SOCK_STREAM)
        self.socket.bind(('127.0.0.1', 8080))
        self.socket.listen(5)
        self.setDaemon(True)
        self.start()

    #----------------------------------------------------------------------
    def run(self):
        """
        Run the socket server
        """
        while True:
            try:
                client, addr = self.socket.accept()

                ready = select.select([client,], [], [], 2)
                if ready[0]:
                    recieved = client.recv(4096)
                    print recieved
                    wx.CallAfter(Publisher().sendMessage,
                                 "update", recieved)

            except socket.error, msg:
                print "Socket error! %s" % msg
                break

        # shutdown the socket
        try:
            self.socket.shutdown(socket.SHUT_RDWR)
        except:
            pass

        self.socket.close()

I went ahead and copied Cody’s name for this class, although I ended up simplifying my version quite a bit. IPC stands for inter-process communication and since that’s what we’re doing here, I thought I’d leave the name alone. In this call, we set up a socket that’s bound to 127.0.0.1 (AKA the localhost) and is listening on port 8080. If you know that port is already in use, be sure to change that port number to something that isn’t in use. Next we daemonize the thread so it will run indefinitely in the background and then we start it. Now it’s basically running in an infinite loop waiting for someone to send it a message. Read the code a few times until you understand how it works. When you’re ready, you can move on to the wxPython code below.


Note that the threading code above and the wxPython code below go into one file.



# wx_ipc.py

########################################################################
class MyPanel(wx.Panel):
    """"""

    #----------------------------------------------------------------------
    def __init__(self, parent):
        """Constructor"""
        wx.Panel.__init__(self, parent)

        btn = wx.Button(self, label="Send Message")
        btn.Bind(wx.EVT_BUTTON, self.onSendMsg)

        self.textDisplay = wx.TextCtrl(self, value="", style=wx.TE_MULTILINE)

        mainSizer = wx.BoxSizer(wx.VERTICAL)
        mainSizer.Add(self.textDisplay, 1, wx.EXPAND|wx.ALL, 5)
        mainSizer.Add(btn, 0, wx.CENTER|wx.ALL, 5)
        self.SetSizer(mainSizer)

        Publisher().subscribe(self.updateDisplay, "update")

    #----------------------------------------------------------------------
    def onSendMsg(self, event):
        """
        Send a message from within wxPython
        """
        message = "Test from wxPython!"
        try:
            client = socket.socket(socket.AF_INET,
                                   socket.SOCK_STREAM)
            client.connect(('127.0.0.1', 8080))
            client.send(message)
            client.shutdown(socket.SHUT_RDWR)
            client.close()
        except Exception, msg:
            print msg

    #----------------------------------------------------------------------
    def updateDisplay(self, msg):
        """
        Display what was sent via the socket server
        """
        self.textDisplay.AppendText( str(msg.data) + "\n" )

########################################################################
class MyFrame(wx.Frame):
    """"""

    #----------------------------------------------------------------------
    def __init__(self):
        """Constructor"""
        wx.Frame.__init__(self, parent=None, title="Communication Demo")
        panel = MyPanel(self)

        # start the IPC server
        self.ipc = IPCThread()

        self.Show()

if __name__ == "__main__":
    app = wx.App(False)
    frame = MyFrame()
    app.MainLoop()

Here we set up our user interface using a simple frame and a panel with two widgets: a text control for displaying the messages that the GUI receives from the socket and a button. We use the button for testing the socket server thread. When you press the button, it will send a message to the socket listener which will then send a message back to the GUI to update the display. Kind of silly, but it makes for a good demo that everything is working the way you expect. You’ll notice we’re using pubsub to help in sending the message from the thread to the UI. Now we need to see if we can communicate with the socket server from a separate script.


So make sure you leave your GUI running while you open a new editor and write some code like this:



# sendMessage.py
import socket

#----------------------------------------------------------------------
def sendSocketMessage(message):
    """
    Send a message to a socket
    """
    try:
        client = socket.socket(socket.AF_INET,
                               socket.SOCK_STREAM)
        client.connect(('127.0.0.1', 8080))
        client.send(message)
        client.shutdown(socket.SHUT_RDWR)
        client.close()
    except Exception, msg:
        print msg

if __name__ == "__main__":
    sendSocketMessage("Python rocks!")

Now if you execute this second script in a second terminal, you should see the string “Python rocks!” appear in the text control in your GUI. It should look something like the following if you’ve pressed the button once before running the script above:


[Screenshot: wxipc — the demo GUI showing the received messages]


Wrapping Up


This sort of thing would also work in a non-GUI script too. You could theoretically have a master script running with a listener thread. When the listener receives a message, it tells the master script to continue processing. You could also use a completely different programming language to send socket messages to your GUI as well. Just use your imagination and you’ll find you can do all kinds of cool things with Python!
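
Here's a rough sketch of that non-GUI variant (standard library only; it isn't in the downloadable source), where a listener thread sets an Event that the master script waits on:


# master.py - sketch: a master script that blocks until a socket message arrives
import socket
import threading

go_ahead = threading.Event()

def listener():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('127.0.0.1', 8080))
    server.listen(1)
    client, addr = server.accept()
    print "Listener got:", client.recv(4096)
    client.close()
    server.close()
    go_ahead.set()                  # tell the master to continue

t = threading.Thread(target=listener)
t.setDaemon(True)
t.start()

print "Master waiting for a message..."
go_ahead.wait()                     # blocks until the listener fires
print "Master continuing processing."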


Additional Reading



Download the Source


Note: This code was tested using Python 2.6.6, wxPython 2.8.12.1 on Windows 7







via Planet Python http://www.blog.pythonlibrary.org/2013/06/27/wxpython-how-to-communicate-with-your-gui-via-sockets/