Cannabis criminalization helps law enforcement (perform unconstitutional searches)
Opponents of cannabis decriminalization often say we should keep it criminalized in order to help law enforcement catch bad guys, and indeed it serves as an important tool for justifying searches of individuals and premises. After all, these searches may turn up more harmful criminal activities or individuals with warrants. LEOs will often admit that in many cases they are not really after the pot and may even ignore the offense if no other offenses are found.
From a public safety standpoint, allowing “I smelled marijuana” to serve as probable cause for search may on net improve safety, but we should reject this notion because these searches are basically unconstitutional. Cannabis use, after all, is not what most officers are really after; it’s a justification.
The Fourth Amendment was not created by accident; the power to search without cause can be and often is abused by LEOs, and the especially militarized flavor of drug raids in the U.S. is often needlessly violent and deadly.
When cannabis is no longer criminalized, yes, searching individuals based on a hunch (without real cause) will be harder—the goal of the Bill of Rights was not to make policing easy—but consider if we had never criminalized cannabis and it had at least as many users as it currently does. Knowing what we now know about the mild harms of the drug, would we really choose to turn at least several million people into regular criminals in order to give law enforcement the power to search them without cause, occasionally via violent SWAT raids?
No, we would not and should not. If anything, this “LE tool” argument is a reason to decriminalize.
Designing a Highly Reusable HTML Component
The UF College of Education uses a header/footer template designed to be applied to any page, and we use this across several applications such as WordPress, Moodle, Elgg, and Drupal. Changes can be propagated quickly to all sites, and adding the template to new PHP apps is trivial.
If you need to create an HTML component that can be reused across a wide set of sites/apps, these guidelines might help.
Avoid HTML5 elements if you can
HTML5 elements like header must be accompanied by JS to fix compatibility in old browsers. Sticking to HTML4 also helps with validation under HTML4/XHTML doctypes. Of course, if you’ll only be deploying to HTML5 sites that already have the IE shim, go ahead and use the best markup elements for the job.
Guard the host markup against the component CSS
Any component CSS selectors that match host markup elements can cause massive problems in the host application, and if this component is applied across an entire site, it’s very difficult to predict the impact. The key then is following a few simple rules for the component CSS:
- Each class/id name must be sufficiently unique such that there’s practically no chance of a name collision. Unfortunately, even selectors like .hidden and .clearfix could be implemented in different ways in the host app, and this could cause problems. Using a constant prefix in every name might help.
- Each selector must include at least one of the component classes/ids.
- Avoid using a CSS reset/normalizer. If you must, make sure each selector follows the above rules so the effect of this is limited in scope to the component.
- Selectors must not match non-component elements. E.g. the selector #component-root + div should not be used because it would select a DIV element after the component.
- Take care to avoid obscuring elements in the host page. E.g. negative margins could pull the component over a host element.
Guard the component markup against the host CSS
Similarly, host CSS could break the desired styling of the component markup.
- Test the component in a wide variety of pages and applications. Especially test pages that use common CSS resets and normalizers, and that have a lot of element-only selectors in the CSS.
- When conflicts occur, make the affected element’s selector more specific until the component CSS “wins”. As always, test across the browsers you need to support; IE7 still has some specificity bugs for the selectors it understands, if you need to care about that.
Javascript Tips
Expose as little to the global namespace as possible
E.g., define all necessary functions and variables inside an anonymous function that is executed:
    !function () {
        // your code here ...

        // explicitly expose an API
        this.myComponentAPI = api;
    }();
Document your script’s dependencies and let the implementor supply those
Automatically including JS libraries may break the host app. Consider the case of jQuery: many plugins extend the jQuery object, so redefining it removes those added functions (it actually stores the old object away, but it will break the host app nonetheless). Don’t assume the implementer got this right. Wrap your functionality in a condition that first tests for the presence of the library/specific features you need, and make it easy for the implementer to realize the problem if they have a console open.
Here’s an example of how to test for jQuery’s on function:
    if (this.jQuery && jQuery.fn.on) {
        // code
    } else if (this.console && console.log) {
        console.log('Requires jQuery.on, added in version 1.7');
    }
Assume the component could be embedded after the page loads, and multiple times
Carefully consider the initialization process your component requires. In some cases it’s reasonable to leave initialization to be triggered by the implementer. If you do automatically use DOMReady functions like jQuery’s ready(), consider allowing the implementer to cancel this and initialize later.
Wedding Mixes: Moose
Moose “This River Never Will Run Dry” [marry in the morning mix]
In this mix:
- More balanced volume across the song (you can hear the intro without having to turn it down several times later). This is a simple volume envelope, so it didn’t squash the dynamics any more than they already were.
- Shortened outro without the screeching halt at the end. Yes, some will find this blasphemous. Judge away.
Decouple User and App State From the Session
When building software components, you want them to be usable in any situation, and especially inside other web apps. The user of your component should be able to inject the user ID and other app state without side effects, and similarly be able to pull that state out after use.
That’s why, beyond all the standard advice (GRASP, SOLID), you’d be wise to avoid tight coupling with PHP sessions.
PHP’s native[1] session is the worst kind of singleton.
It’s accessible everywhere; modifiable by anyone; and, because there’s no way to get the save handlers, there’s no way to safely close an open session, use it for your own purposes, and reopen it with all the original state. Effectively only one system can use it while it’s open.
The Take Home
- Provide an API to allow injecting/requesting user and application state into/from your component.
- Isolate session use in a “state persister” subcomponent that can be swapped out/disabled, ideally at any time during use (a rough sketch follows).
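To make this concrete, here’s a minimal sketch of the idea in PHP. The names (StatePersister, SessionStatePersister, MyComponent) are hypothetical, not from any particular library; the point is that the component talks only to a tiny interface, and the host app decides what actually backs it:

    // Hypothetical interface: the component depends on this, never on $_SESSION.
    interface StatePersister {
        public function get($key);
        public function set($key, $value);
    }

    // Default implementation backed by the native session
    // (assumes session_start() has already been called).
    class SessionStatePersister implements StatePersister {
        public function get($key) {
            return isset($_SESSION[$key]) ? $_SESSION[$key] : null;
        }
        public function set($key, $value) {
            $_SESSION[$key] = $value;
        }
    }

    // The host app injects whichever persister it wants (native session,
    // UserlandSession, a plain array for tests...) and can swap it out later.
    $component = new MyComponent(new SessionStatePersister());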
[1] Shameless plug: I created UserlandSession as a native session replacement. You can use multiple instances at the same time, and there’s no global state[2] nor visibility. This is not to suggest that you use it in place of native sessions, but it’s available in a pinch or for experimentation.
[2] Yes, I know cookies and headers are global state. UserlandSession is not meant to solve all your problems, pal.
Elgg Plugin Tip: Make Your Display Queries Extensible With Plugin Hooks
If you’re building an Elgg plugin that executes queries to fetch entities/annotations/etc. for display, odds are someone else will one day need to alter your query, e.g. to change the LIMIT or ORDER BY clauses. Depending on where your query occurs, he/she may have to override a view, replace an action, replace a whole page handler, or have no choice but to alter your plugin code directly. There’s a better way.
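For example, here’s a rough sketch of one way to do it with plugin hooks, assuming Elgg 1.8’s hook API (the hook name "myplugin:list_options" and the function names below are made up): pass your $options array through a plugin hook before running the query, so other plugins can alter it without touching your code.

    // In your plugin, before fetching entities for display:
    $options = array(
        'type' => 'object',
        'subtype' => 'myplugin_item',
        'limit' => 10,
        'order_by' => 'e.time_created DESC',
    );
    // Let other plugins filter the options (hook name is hypothetical).
    $options = elgg_trigger_plugin_hook('myplugin:list_options', 'object', array(), $options);
    $entities = elgg_get_entities($options);

    // In someone else's plugin, the query can now be altered cleanly:
    function otherplugin_alter_list_options($hook, $type, $options, $params) {
        $options['limit'] = 25;
        return $options;
    }
    elgg_register_plugin_hook_handler('myplugin:list_options', 'object', 'otherplugin_alter_list_options');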
My Moodle Page, Now With 99.6% Fewer Queries
My work recently upgraded from Moodle 1.9 to 2.3, and some users were experiencing slow page loads while the site was zippy for others. Today we discovered that, for some users, the My Moodle dashboard page was requiring several thousand DB queries. For one user, enrolled in four courses, the page required over 14,000 queries. I guess it could be worse: one user reported over 95,000!
This was completely unacceptable; this page is a common navigation point for all users, and every user is forwarded there on login.
With xdebug and Autosalvage rocking, it only took me about three hours to rework the page so that only the course links are displayed by default, and a button is provided beside each to load that course’s activities into the page via Ajax. Since most users just use the page for navigation between courses, this tradeoff seems well worth the performance gain. Now this page–without displaying activities–is down to ~60 queries for every user (sadly average for a Moodle page).
I suspect that loading the activity list for a large course will still take a performance bite, but in my limited testing it seemed pretty instantaneous–yes, there’s a reason why modern apps are built on Ajax. Although good work has been done to cache front-end files, Moodle still seems to be in serious need of query reduction optimization when building HTML pages.
After getting some feedback over the weekend, I’ll release this patch for other Moodle providers. Our theme uses jQuery and the Javascript side of this was maybe 10 lines, but I imagine it would need to be ported to YUI to get into core.
Bad Analogies Lead to Bad Policies
I was forwarded an e-mail that made a terrible analogy (my emphasis):
Here’s another way to look at the Debt Ceiling: … You come home from work and find … your home has sewage all the way up to your ceilings.
Sewage would make a home immediately uninhabitable and actively damage the value of the home and its contents. Your first clue that this analogy is a failure is to see that the U.S. debt doesn’t do this. During deficits we’ve had great periods of growing prosperity and people from all over want to come here.
Consider also the huge debt we incurred fighting WWII. If debt were like sewage, we would’ve done more to stay out of the war, or the resulting debt would’ve led us to ruin by the ’50s. Instead, high employment (fueled by deficit spending) left us with a strong economy able to quickly pay back that debt while prospering.
U.S. debt is–not coincidentally–more like a credit card. Yes, we’d prefer not to need one, and we must keep paying on it, but besides the payment, it incurs no other short term liabilities, and no one is demanding we pay it off tomorrow, next year, or in our lifetimes. And unlike most consumers, the U.S. is known to be the most trustworthy borrower in the world, so creditors treat us well, never hitting us with surprise fees (i.e. it doesn’t have the risk associated with a consumer credit card), and in fact they’re willing to loan to us right now at almost no interest on current purchases. (We’ll come back to that.)
We know that credit is helpful during emergencies, regardless of your debt level. If sewage were flooding your home and you had only a credit card to pay with, you’d still call the plumber and charge it.
Back in the real world, a more apt metaphor for flooding in your home is our high unemployment. It is actively damaging human capital as people lose their skills, homes, and families from financial stress. And this damage is not limited to those directly affected. Being unemployed for long is known to reduce the wages you earn for the rest of your working life, which reduces your productivity and the taxes you can pay, making the country’s long-term revenue and growth problems even worse.
The good news is that the waste water is receding. The bad news is that it’s receding slowly, while destroying the value of our nation’s workforce.
Even worse, irrational fears of deficits have distracted us from the real emergency. Politicians and hard money economists have convinced us to accept damagingly high unemployment to avoid using the credit card, and–making the situation even more head-slappingly absurd–new purchases on our credit card accrue virtually no interest. We could borrow for almost nothing to help get people back to work doing real, productive, useful things that grow the economy, but instead we’re a madman yelling, “can’t use the credit card!”, while sewage floods in.
It’s unsurprising that this recovery is slow because it’s the only one in recent history where we’ve simultaneously slashed government spending (mainly at the state/local levels). While politicians complain about our out-of-control spending–mostly safety net spending that will recede after the recession–we’re fully suffering the effects of austerity, and as the UK is finding out, it’s painful and ineffective.
I’ve been very much won over by the arguments and real evidence presented in support of Keynes. The hard money advocates have some compelling, ideologically pure arguments, but their models don’t seem to match what we see in the real world during recessions, and getting this wrong hurts millions of people, not just economists’ reputations. Krugman can be an annoyingly partisan hack, but even many right-leaning economists know he’s right on the economics, and he regularly posts real data from the economy to prove it. And I’m certain most politicians secretly agree. Watch what happens when budget cuts are proposed in their area. Their argument becomes, “If we cut this (unnecessary) military project it will destroy the town.” And they’re right; government spending can be just as important as consumer spending in supporting an economy. Consumers without jobs can’t buy much, and demand drives everything.
That doesn’t mean we should keep unnecessary projects, but at this time there are many useful tasks we should be hiring the unemployed to do. For many, those are the jobs they were already doing before state budget cuts–like teaching and policing. This is the absolute best time to bail out state governments and to take up needed infrastructure projects that will have to be done eventually.
Very sadly, neither party will be making the race about the urgency of high unemployment, perhaps because it’s out of sight for them and the unemployed don’t fund SuperPACs. At the very least Obama has tried to pursue legislation like 2011’s American Jobs Act (killed by Republicans).
And despite all the hand-wringing about debt, neither presidential candidate is proposing serious plans to address it. Romney’s tax and growth plans are just fantasy and Obama’s don’t raise enough money to do much.
And if this post isn’t depressing enough, maybe David Frum will work for you.
In Support of Bloated, Heavyweight IDEs
I’ve done plenty of programming in bare-bones text editors of all kinds over the years. Free/open editors were once pretty bad and a lot of capable commercial ones have been expensive. Today it’s still handy to pop a change into Github’s web editor or nano. Frankly, though, I’m unconvinced by arguments suggesting I use text editors that don’t really understand the code. I believe that, independent of your skill level, you’ll produce better code, and produce it faster, by using the most powerful IDE you can get your hands on.
To convince you of this, I’ll try to show how each of the following features, in isolation, is good for your productivity. Then it should follow that each one you work without will be lowering it. It’s also important to note that leaving in place or producing bugs that must be fixed later, or that create external costs, reduces your real productivity; and “you” in the list below could also mean you six months from now, or another developer. The list is in no particular order.
- Syntax highlighting saves you from finding and fixing typos after compilation failures. In a language where a script file may be conditionally executed, like PHP, you may leave a bug that will have to be dug up by someone else after costing end users a lot of time. In rarer cases the code may compile but not do what you expected, costing even more time. SH also makes the many contexts available in code (strings, functions, vars, comments, etc.) significantly easier to see when scanning/scrolling.
- Having a background task scan the code can help catch errors that simple syntax highlighting cannot, since most highlighters are designed to expect valid syntax and may not show problems.
- Highlighting matching braces/parentheses eases the writing and reading of code and expressions.
- IDEs can show the opening/closing lines of blocks that appear offscreen without you needing to scroll. Although long blocks/function bodies can be a signal to refactor, this can aid you in working on existing code like this.
- Highlighting the use of unknown variables/functions/methods can show false positives for problems, but more often signals a bug that’s hidden from sight: E.g. a variable declared above has been removed; the type of a variable is not what is expected; a library upgrade has removed a method, or a piece of code has been transplanted from another context without its dependencies. Missing these problems has a big future cost as these may not always cause compile or even runtime errors.
- Highlighting an unused variable warns you that it isn’t being used how it was probably intended. It may uncover a logic bug or mean you can safely remove its declaration, preventing you from later having to wonder what it’s for.
- Highlighting the violation of type hints saves you from having to find those problems at compile or run-time.
- Auto-completing file paths and highlighting unresolvable ones saves you from time-consumingly debugging 404 errors.
- Background scanning other files for problems (applying all the above features to unopened project files) allows you to quickly see and fix bugs that you/others left. Simply opening an existing codebase in a more capable IDE can reveal thousands of certain/potential code problems. If you’re responsible for that code, you’ve potentially saved an enormous amount of time: End users hitting bugs, reporting them, you reading, investigating, fixing, typing summaries, etc. etc. etc. This feature is like having a whole team of programmers scouring your codebase for you; a big productivity boost.
- Understanding multiple language contexts can help a great deal when you’re forced to work in files with different contexts embedded within each other.
- Parameter/documentation tooltips eliminate the need to look up function purpose, signatures, and return types. While you should memorize commonly used functions, a significant amount of programming involves using new/unfamiliar libraries. Leaving your context to look up docs imposes costs in time and concentration. Sometimes that cost yields later benefits, but often you’ve just forgotten the order of a few parameters.
- Jumping to the declaration of a function/variable saves you from having to search for it.
- “Find usages” in an IDE that comprehends code allows you to quickly understand where and how a variable/function is used (or mentioned in comments!) in a codebase, with very little error.
- Rename refactoring can carefully change an identifier in your code (and optionally filenames and comments) across an entire project of files. This can also apply to CSS; when renaming a class/id, the IDE may offer to replace its usages elsewhere in CSS and HTML markup. The obvious benefits are time savings and reduction in the errors you might make using simpler string/regular expression replacements, but there are other gains: When the cost of changing a name reduces to almost nothing, you will be more inclined to improve names when needed. Better names can reduce the time needed to understand the code and how it should be used, and to recognize when it’s not being used well.
- Comprehension of variable/expression type allows the IDE to offer intelligent autocompletion options, reducing your time spent typing, fixing typing errors, and looking up property/method names on classes. But more than saving time, when an expected autocomplete option doesn’t appear, it can let you know that your variable/expression is not of the type that you think it is.
- IDEs can automatically suggest variables for function arguments based on type, so if you’re calling a function that needs a Foo and a Bar, the IDE can suggest your local vars of those types. This eliminates the need to remember parameter order or the exact names of your vars. Your role briefly becomes to check the work of the IDE, which is almost always correct. In strongly typed languages like Java, this can be a great boost; I found Java development in NetBeans to be an eye-opening experience to how helpful a good IDE can be.
- IDEs can grok Javadoc-style comments, auto-generate them based on code, and highlight discrepancies between the comments and the code. This reduces the time you spend documenting, improves your accuracy when documenting, and can highlight problems where someone has changed the signature/return type of a function, or has documented it incorrectly. IDEs can add support for various libraries so that library code can be understood by the IDE without being in the project.
- IDEs can maintain local histories of project files (like committing each change in git) so you can easily revert recent changes (or bring back files accidentally deleted!) or better understand the overall impact of your recent changes.
- IDEs can integrate with source control so you can see changes made that aren’t committed. E.g. a color might appear in the scroll bar next to uncommitted changes. You could click to jump to that change, mouse over to get a tooltip of the original code and choose to revert if needed. This could give you a good idea of changes before you switch focus to your source control tools to commit them. Of course being able to perform source control operations inside the IDE saves time, too.
- IDEs can maintain a local cache of code on a remote server, making it snappier to work on, reducing the time you’d spend switching to a separate SFTP app, and allowing you to adjust the code outside the IDE. The IDE could monitor local and remote changes and allow merging between the versions as necessary.
- IDEs can help you maintain code style standards in different projects, and allow you to instantly restyle a selection or file(s) according to those standards. When contributing to open source projects, this can save you from having to go back and restyle code after having your change rejected.
- Integrated debugging offers a huge productivity win. Inline debugging statements are no replacement for the ability to carefully step through execution with access to all variable contents/types and the full call stack. In some cases bugs that are practically impossible to find without a debugger can be found in a few minutes with one.
- Integrated unit test running also makes for much less context switching when using test-driven development.
This list is obviously not exhaustive, and is geared towards the IDEs that I’m most familiar with, but the kernel is that environments that truly understand the language and your codebase as a whole can give you some powerful advantages over those that don’t. A fair argument is that lightweight editors with few bells and whistles “stay out of your way” and can be more responsive on large codebases–which is true–but you can’t ignore that there’s a large real productivity cost incurred in doing without these features.
For PHP users, PhpStorm* includes almost everything in the list, with Netbeans coming a close second. In my limited experience, Eclipse PDT was great for local projects, but I’ve only seen basic syntax highlighting working in the Remote System Explorer. All three also understand Javascript, CSS, and HTML fairly well, and to some extent basic SQL and XML DTDs.
*Full disclosure: PhpStorm granted me a copy for my work on Minify, but I requested it, and my ravings about it and other IDEs are all unsolicited.