I love paella, but hate seafood. My operating theory on the paella is that, despite all outward European appearances, I am actually Asian, and paella is just European fried rice.
So I created a paella recipe that has no seafood in it. It is, in the words of culinary gastronomy, quite nommy. But, as I am a software person and not a writer, this is just a meta-recipe (that is, it tells you what to make, but assumes you know how to make it).
This will feed about 8 people, so be warned: you need a biiiiig pan for the last part. When I made this, I cooked it in two passes, because I don't have a biiiiig pan.
Here goes:
1 cup (uncooked) rice
1 can chicken broth
2 pounds chicken
10oz chorizo sausage
1 yellow onion
4 cloves garlic
2 Tbsp capers
olive oil (I recommend Filippo Berio; it doesn't beat you over the head with olive flavor)
RICE:
- cook rice, using the broth instead of water.
CHICKEN:
- dice up the chicken and sauté in olive oil in a shallow pan. It doesn't have to be cooked all the way through (it'll be finished off later) but it should be mostly done.
- set aside
CHORIZO:
- pan-fry the chorizo, in more olive oil. This one I would cook all the way through.
- set aside
ONIONS AND GARLIC:
- chop up the onions and garlic
- sauté them in even more olive oil.
- set aside
By the time you're done with all that, the rice should be ready. Put everything (and the capers) in a big pan, throw in more olive oil, and stir. Cover, then let it sit on medium heat for another 15 minutes, stirring occasionally.
Nom nom nom nom.
Sunday, December 5, 2010
Tuesday, November 16, 2010
Fun with function call profiling
On a lark, I decided to construct an experiment to see how much (if any) difference in runtime there was for various permutations on function calls in C++. I wanted to test objects vs. pointers, and static vs. dynamic binding. This was not a rigorously defined experiment, and I was honestly expecting not much difference at all. However, I was shocked to discover that these variations create some substantial differences in runtime. Further, this finding is interesting because profilers don't normally report function-call overhead as a separate cost.
I defined four trivial foo() functions that do nothing and return nothing:
void foo() { return; }
class Unvirtual {
public:
    void foo() { return; }
};

class Base {
public:
    virtual void foo() { return; }
};

class Derived : public Base {
public:
    void foo() { return; }
};
The main() code called foo() 50 billion times in each of the following manners:
Unvirtual o; o.foo();
Base o; o.foo();
Derived o; o.foo();
Unvirtual *o = new Unvirtual; o->foo();
Base *o = new Derived; o->Base::foo();
Derived *o = new Derived; o->foo();
::foo();
Base *o = new Base; o->foo();
Base *o = new Derived; o->foo();
main() was engineered to make sure that the dominant factor in the runtime was the function call overhead -- the code is small enough that the program should be immune from I/O delay and cache misses, and I compiled with -O0 so that nothing would get inlined. I made one empty loop just so that I could subtract the loop overhead from the runtime. I used times() to get the runtime, and I ran everything 6 times to average the results. Here's what I found:

Let's go through the list of dismaying findings!
First: pointers cost runtime. Look at the difference between unvirtual.foo() and unvirtual->foo() -- a jump of 35% in runtime overhead!
Second: virtual functions cost you dearly. Look at the jump from unvirtual->foo() to the virtual call through a Base* that points at a Derived (base->foo()), an astounding 177% increase in runtime overhead.
Third: non-pointer Derived objects incur about the same cost as pointer calls. I do not really understand this case. A non-pointer object should have static binding, so the compiler should be able to figure out at compile time which function to call, and there should be no run-time cost at all. But if there IS a runtime element to resolving which function to call, it should carry a significant overhead cost like the others. I'm not sure why a plain Derived object incurs a small additional cost, but it was consistent across the 6 runs.
One huge zit on this data set is the cost of the global ("::foo()") function. As a global function, there's no dynamic component to it. In fact, it should be the absolute cheapest function call, because it doesn't have to pass the implicit 'this' pointer like all the others do. However, it has one of the worst runtimes, falling right in the middle of all the dynamic types! This defies explanation. It also had the largest swing across its 6 runs, ranging from 13k to 27k, a spread of over 100%. (All of the other runtimes correlated to within a percent.)
So what do we take away from all this? I think the important lesson is the relatively high cost of virtual functions, even when it's possible to figure out which function to call at compile-time. However, keep in mind that function call overhead is usually a trivial part of realistic program runtime, so don't interpret this data to mean you should avoid virtual functions in any speed-critical application.
Monday, March 22, 2010
bison warning: "conflicting outputs to file ..."
In case anyone cares, I figured out why I was getting this message from bison:
foo.y: warning: conflicting outputs to file `foo.tab.hh'
It turns out that I was inadvertently giving the foo.tab.hh file as bison's -o parameter. Since the -o is supposed to be the .cc file ("foo.tab.cc"), bison derives the .hh file by chopping off the extension (to get "foo.tab.") and then adding "hh" (to get "foo.tab.hh"). Thus bison was writing both the .hh and .cc files to the same place.
The reason I was doing this is because my Makefile was messed up. I was using generic rules to build flex/bison files:
%.l.cc: %.l %.tab.hh
	flex ... -o $@

%.tab.cc %.tab.hh: %.y
	bison ... -o $@
Of course the flex run needs the foo.tab.hh file for all the state definitions, so the bison build is done first. But since it was foo.tab.hh that triggered the build, "$@" is set to the .hh file instead of the .cc file.
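One possible fix (my own sketch, keeping the post's elided flags as "...") is to name the .cc file explicitly in the recipe instead of using $@. In a pattern rule, $* expands to the stem, so bison's -o parameter is correct no matter which target triggered the build:

```makefile
# $* is the pattern stem ("foo"), so -o always names the .cc file,
# regardless of whether the .cc or the .hh target fired the rule.
%.tab.cc %.tab.hh: %.y
	bison ... -o $*.tab.cc $<
```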
The only reason I'm babbling about this is because there's no good explanation of that bison message anywhere on google. Hopefully I'll save someone else some anguish. :)
Tuesday, February 16, 2010
a very clean keyboard
I heard, from multiple sources, that one could clean one's keyboard by putting it, face down, on the top rack of the dishwasher.
However, I now hear that Snopes debunked that (though I cannot find the exact article).
Further, I now have direct evidence that it does not, in fact, work. Sparkly clean though the keyboard is, it doesn't do me any good when pushing the keys does nothing.
Oh well. The experiment was still worth it. :)