Monday, April 8, 2019

PostgreSQL JDBC + SSL

This serves as a note to myself (and anyone else) on a simple way to connect to an SSL-enabled PostgreSQL (9.4+) server using the standard Java SSL properties:

Add sslfactory=org.postgresql.ssl.DefaultJavaSSLFactory to your connection string, and configure your JVM's javax.net.ssl.* properties as you would for most Java applications.

E.g.:

jdbc:postgresql://dbhost:5432/testdb?ssl=true&sslmode=verify-ca&sslfactory=org.postgresql.ssl.DefaultJavaSSLFactory

and, if you use PKCS12 for certificate storage:

java \
    -Djavax.net.ssl.keyStoreType=PKCS12 \
    -Djavax.net.ssl.keyStore=/path/to/cert.p12 \
    -Djavax.net.ssl.keyStorePassword=notapassword \
    -Djavax.net.ssl.keyAlias=mycert \
    -Djavax.net.ssl.trustStore=/path/to/truststore.p12 \
    -Djavax.net.ssl.trustStoreType=PKCS12 \
    -Djavax.net.ssl.trustStorePassword=notapassword
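If you'd rather keep these settings in code than on the command line, the same configuration can be sketched programmatically. This is a minimal sketch using the placeholder paths and passwords from above; the actual connection call is commented out since it needs a live server:

```java
import java.util.Properties;

// Sketch: the same SSL setup done in code. The javax.net.ssl.* system
// properties must be set before the first SSL connection is attempted.
public class PgSslExample {
    public static void main(String[] args) {
        System.setProperty("javax.net.ssl.keyStoreType", "PKCS12");
        System.setProperty("javax.net.ssl.keyStore", "/path/to/cert.p12");
        System.setProperty("javax.net.ssl.keyStorePassword", "notapassword");
        System.setProperty("javax.net.ssl.trustStoreType", "PKCS12");
        System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.p12");
        System.setProperty("javax.net.ssl.trustStorePassword", "notapassword");

        // Connection properties equivalent to the query-string parameters.
        Properties props = new Properties();
        props.setProperty("ssl", "true");
        props.setProperty("sslmode", "verify-ca");
        props.setProperty("sslfactory", "org.postgresql.ssl.DefaultJavaSSLFactory");

        String url = "jdbc:postgresql://dbhost:5432/testdb";
        // try (Connection conn = DriverManager.getConnection(url, props)) { ... }
        System.out.println(url + " sslfactory=" + props.getProperty("sslfactory"));
    }
}
```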



Wednesday, April 3, 2019

Re-framing Privacy

I really enjoy shows that guide me through various points of history, digging into the day-to-day minutiae that high school and college history classes did not - and generally could not - show us.

You can also find nuggets of knowledge that can expand your understanding of the modern day.  This happened recently while watching Lucy Worsley's "History Of The Home" series - I think it was the "Bedroom" episode.

While discussing the lack of intimacy in the home, she said - and I'm paraphrasing - that privacy was the ability to choose who you share yourself with.  While obvious, and probably not uncommon, the quote rang in my head, echoing through the chambers.  Quick aside: it's worth watching that series just to hear about the origins of "making the bed" and "sleep tight".

It clarified my mistaken presumption of privacy as a passive "something" that you had; it is, in actuality, an action that you control.  It's what you do, not what you have.

Loss, or invasion, of our privacy, then, is the wrong way to think about the privacy problems we face today in technology, government, and society.   Whenever I hear about it, privacy is presented as a secondary privilege - a possession, as if it were my home.

This leads me to believe that privacy should be compared to Free Speech.  In fact, privacy seems to be involved in exercising free speech: I choose to whom, of what, and how much of it, I speak.  Being stripped of privacy prevents me from effectively exercising my free speech rights.  Now the encroacher has not just read your journal, but has identified your personal expressions ("oh.  I see you like to dance as you get into the shower").

In this way, we need to think of this as being stripped of a freedom, rather than a loss or invasion.   Privacy is a choice of what you share, and how much of it you share: you choose when to stop sharing.

Unfortunately, the platforms we use are actively getting in the way of us exercising our privacy: "Look at what is going on out there.  Just take a peek.  Don't you want to say something about it?  Perhaps do something - we can help you do that something."  Kind of like having a kid around: you don't focus on their presence, you don't sense any danger in them being around while you're acting out your day, then BAM!  Your kid just told your friend what you bought them for their birthday - or, worse, tells your girlfriend that you bought her an engagement ring (https://money.cnn.com/galleries/2010/technology/1012/gallery.5_data_breaches/3.html).

Here's a thought: when you go into a store, you have a couple of areas where you could exercise privacy (e.g. the bathroom).  Where can you do that on Facebook?  On any Google property?  Or even Amazon or Apple?  Every move you make goes directly into their internal data set, which they can parse whenever they want (in private, too!)

This is definitely a topic to revisit soon.






Saturday, April 7, 2018

Agile Development: The Psychological Toll

I have a new hypothesis about Agile Software Development: it will cause a form of "Stockholm Syndrome" along with a healthy serving of Cognitive Dissonance in a significant number of developers.  Many of those developers become ardent evangelists who will not even consider alternatives to - or even deviations from - their Agile methodology.

I've been working on or with teams using Agile methodologies for over a decade.  It will be around for a while.  Yet there has always been a problem with those teams: a feeling of tension at the end of the sprint that, even though the developers never acted on it, was internalized as shame, resentment, self-loathing, and even a mild depression.

In the last 2-3 months, some on my team, and I myself, have waded through such feelings.  I'm fortunate to have been able to work through them, and to emerge with a bit more understanding of why I've had such a bias against Agile.

With a team using Agile methodologies, there is an overwhelming urgency to complete your selected story/task/bug/issue within the sprint cycle - "that's what the mantras say!  That's how Agile works!  Everyone else who uses Agile does it, and does it better than me/us!  I cannot be the slacker!"

And then it happens. 

You've gotten a few sprints closed.  You completed your tasks in time; some were close, but it was probably just the extra cycles spent on Reddit.   Staying an extra couple hours made sure I kept velocity at the right level. 
The next sprint was not so fortunate.  You did not finish a task.  It was only 5 story points.  I'm an idiot.  Don't worry, I'll finish it up during the first few days of the next sprint, and still complete my typical number of points.
I'll just work a few extra (unpaid or unreported) hours.


Now you've moved into the mindset of "keeping up with the team," rather than "solve a problem effectively for this domain/customer".

We're coming up on the end of the next sprint.  I've done a decent amount of work creating and coding a solution...but it feels wrong.  I could really use a day to review and maybe (no...no!) rework the solution.   
Nope.  No reworking.  What I have implements the fix/feature.  It'll fly through testing. 
But there's something wrong with it...I really need to know...
Ugh.  Close the ticket.  Now ignore it.

There it is.  The start - or perhaps relapse - into a state of Cognitive Dissonance: You believe deeply that you are an excellent Software Engineer; you create solutions for problems that exceed and amaze not only your customers, but your team and even yourself.  Yet you have just committed a solution to the baseline that you feel is...wrong...insufficient for the actual problem as a whole.

The show must go on, so you have to convince yourself that it must be the correct solution.  Agile is good, right?  It knows something you don't.  Master it and you will become the master.
And if others follow it, they will see me as a master....  If others follow it, I am not wrong.




Wednesday, February 1, 2017

Security through technology?

Rambling post ahead...just an attempt to keep writing, and perhaps generate some focused topics for later posts.

I've noticed a rapid push for more and more security tools to be installed on everything from a toaster (e.g. IoT devices) to desktops, to VMs and physical servers.

This one monitors your activity (all of it?  Specific actions?); this one verifies that blob of bits is benign (or, really, not known to be malicious); this one prevents you from using parts of the system (USB ports, CD/DVD drive).

Luckily...it leaves about 1/4th of the system available for those actual value-generating activities (hopefully those activities are ok, even if monitored).  Hopefully we all over-purchased resources so we can handle the current - and the future - security tools that will be required on our systems.

It's not unexpected.  It's a lot easier to sell a product that offers (the illusion of) control, than to be constantly vigilant, or work with those around you to improve actual security.

And it's not that these products cannot be valuable in one's goal to maintain the security, integrity, and privacy of your data, environment, and, of course, yourself.  They just tend to become a way to say you've done something to improve security, without actually proving it improved your - or your customers' - security.

Then, there's the issue of parsing all the security data these tools generate...which requires resources that also need to be secured...



Sunday, January 29, 2017

The Benefit of Not Using the Team Development Environment

I'm not a "standard"-compliant type of developer.  At least, not when it comes to my development environments.

App server integrated with my IDE?  Nah.  I'll spin up a new VM with a packaging-compliant version, secured as close as possible to the deployed instance.

There's just something about creating your own deployment instance that helps me settle in.  Now I know why Tomcat and NGINX have trouble speaking SSL to each other.  

And it also uncovers trivial, yet easily hidden, deployment errors early.

Not long ago, we had an intern code up a small metrics collection component.  I was on a task that had me touching just about all parts of the code, which caused me to get quite a bit out of sync with the primary development branch.  Other developers were happily chugging along, pulling the new code, finishing tasks from the backlog.  Happy times.

I finally reached a stable point where I could start merging in the fast moving development line (trying to avoid really intense merges).  It all merged in without much trouble.  I had it building a new WAR, and felt pretty confident.

Then, I tried to run it in my (non-standard) environment.  Hmmm, what's this "ERROR" message?  Perhaps it can be ignored as an in-progress task?

"Can't create /metrics.tmp"

What?  Why is it trying to create a file in the root directory?  Time to dig!

Oh.  The new metrics code did not specify a directory - or offer a property to specify one - when creating a temporary file for collecting the metrics.  

How did it work?  Well, he was fully set up with the standard environment, which ran Tomcat from his Eclipse IDE (no, I deviate here too and use IntelliJ).  The file was created without issue since the default directory was his home directory, and Tomcat ran under his identity.

Even deploying it to a development test environment did not uncover the issue: the failure did not stop the overall startup process, everything ran, and metrics were not a priority for testing a pre-MVP product.  That environment also used a "development" setup, so the creation succeeded, though it put the file alongside the application installation directory.
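The underlying fix is to make the temp-file location explicit and configurable, rather than relying on whatever directory the process happens to resolve relative paths against. A minimal sketch of that idea ("metrics.dir" is a hypothetical property name, not the one from our codebase):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the fix: resolve the metrics temp file against a configurable
// directory, falling back to java.io.tmpdir, instead of a bare path like
// "/metrics.tmp" whose location depends on who runs Tomcat and from where.
public class MetricsTempFile {
    static Path metricsFile() throws IOException {
        // "metrics.dir" is a made-up property name for illustration.
        String dir = System.getProperty("metrics.dir",
                System.getProperty("java.io.tmpdir"));
        return Files.createTempFile(Path.of(dir), "metrics", ".tmp");
    }

    public static void main(String[] args) throws IOException {
        Path p = metricsFile();
        System.out.println("metrics file: " + p);
        Files.deleteIfExists(p);
    }
}
```

With a property override in place, the deployed environments can each point the file somewhere sensible, and a misconfigured default fails loudly at an obvious path instead of silently landing in someone's home directory.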

So, the next time you start following the "New Developer Environment Setup" docs, you may want to try a few deviations.  It may save some troubleshooting down the line.





Friday, November 4, 2016

JPA Follies


While I have been responsible for some - well, many - missteps when implementing a JPA (+Spring+Hibernate) layer for systems, this is one I'm pretty embarrassed by.

TL;DR: Hibernate throws a "LazyInitializationException" within an @Transactional method, which turns out to be because I attempted to "pre-load" the caches during startup, only to have lazy (proxy) objects loaded and used during a subsequent (i.e. "Session closed") request.  Removing the pre-loading, or using an EntityGraph to eagerly fetch the data, corrected it.

I had 2 tasks to work on this week:

  1. Finish the transition to Spring Data JPA
  2. Minimize the latency for clients requesting data


#1 was going well, but I was still seeing > 60s "Time To First Byte" (TTFB) (#2), and that kept creeping up while adding data.  We're talking > 15,000 rows of multi-table (transformed) data being returned.

Fortunately, that high latency was only on the initial load.  Subsequent loads used the cached versions, so I was hitting < 7 second loads for 15,000+ rows of data.

"So...", I think, "why not pre-load the data!"  It makes sense, and I know there are potential pitfalls, but they should be fairly simple to work through if I search around the 'net.

Wow.  Mrs. Foot, meet Mr. Mouth.

I add in the call to load the data at startup, and then make my request, ready to praise myself for being a genius.

"LazyInitializationException - no Session" 

WTF?

I trace through the request, everything needing an @Transactional - and being called via a Spring Managed Component - is in place, and even the TransactionManager says the transaction is active right before accessing the proxied fields.

What.  The.  <Fill in favorite curse>?

I expected, and saw, the "proxy" class being used by the loaded data.  The methods extracting the fields are being called within the @Transactional (using the getters and setters as required; not direct field access).

It started percolating in my head that perhaps data was being loaded from another thread.  Yet, nothing else was running/processing at the time; this was the only thread loading and using data.

Well...I did have code to load the data from the database after startup.  But...it loaded...

Oh...F...  It called the repository method, but never accessed any of the lazy loaded data, so...it cached the proxy object, closed the session, and started laughing knowing the problems that would happen (yeah, my code is a jerk).

Remove initialization code.  Rerun test.  No LazyInitializationException.  We're back to the long initial load, but that can wait for after getting everything working.
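The failure mode is easier to see in miniature. This is a toy model of the situation, not Hibernate's actual internals - a "session" that lazily loads a field, a cache that holds onto the proxy, and a later request that touches the proxy after the startup session has closed (all class and field names are made up):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Toy model of the pre-loading pitfall. A lazy proxy stays bound to the
// session it was created in; caching it and touching it later reproduces
// the "no Session" failure.
class Session {
    private boolean open = true;
    void close() { open = false; }
    boolean isOpen() { return open; }
}

class LazyOrders implements Supplier<List<String>> {
    private final Session session;
    private List<String> loaded;
    LazyOrders(Session session) { this.session = session; }
    public List<String> get() {
        if (loaded == null) {
            if (!session.isOpen())
                throw new IllegalStateException(
                        "LazyInitializationException - no Session");
            loaded = List.of("order-1", "order-2"); // pretend DB fetch
        }
        return loaded;
    }
}

public class PreloadPitfall {
    static final Map<Long, LazyOrders> cache = new HashMap<>();

    public static void main(String[] args) {
        // Startup "pre-load": fetch the entity, never touch the lazy field.
        Session startupSession = new Session();
        cache.put(1L, new LazyOrders(startupSession));
        startupSession.close(); // startup transaction ends

        // Later request: its own transaction is active, but the cached proxy
        // is still bound to the closed startup session - boom.
        try {
            cache.get(1L).get();
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The later request's transaction being "active" doesn't help, because the proxy never asks the current transaction for anything - it only knows about the session it was born in.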





Wednesday, April 8, 2015

NGINX + Apache Tomcat - Certificate Proxying Adventures

I ran into a problem a while back: I identified NGINX as the best technology to reverse proxy our Apache Tomcat instance, but there was 1 particular requirement that had no real solution at the time:
- Tomcat must use X509 Client Authentication


E.g.:


Client --<X509-Cert>->[ (Proxy w/SSL) ]--<X509-Cert>-->[ (Tomcat) ]


Why was that a problem?  If it was Apache HTTPD, it would be easy: use proxy_ajp and off you go.  It's a well documented configuration.


But this was NGINX, and it did not have a recommended AJP bridge which would pass through the appropriate SSL headers.  HTTP was the preferred pass through.  Even more odd, I could find no actual documentation of this configuration, though it seems like it would be more common (which it might be, but only for internal Enterprise deployments, where X509 is more ubiquitous).


I could not use the backend SSL connection, since Tomcat would pull out the proxy server's X509 certificate for authentication, and the backend web application was not developed with proxying of client certificates in mind, so no special headers (e.g. Proxy-User-DN) configurations would work.  I could have dug into the source and added it, but 1) the work would have to go through a slew of process and vetting for something I was not technically supposed to be doing, and 2) WHY DOESN'T THIS WORK!?  IT SHOULD WORK!


Which brings us to today (well, last week actually).  


First, let's explore this problem a bit.  This will provide some insight into the whys and hows that make up the (fairly simple) solution.


What do we really want to do?  In my case, while a secure connection is desired, it did not have to traverse past the Reverse Proxy (NGINX) - the backend was secured behind private networks and other defenses, and performance was a concern.  No, we just needed to present a valid client certificate to the Tomcat server, and the Spring Security framework would do the rest.


With a direct SSL connection, easy; no thought really.  All the necessary information is there because Tomcat was performing the SSL handshake and verification, and then adding the certificate, DNs, cipher, etc... to the context.  


Okay, so just send over the Client Cert to Tomcat, right?  NGINX has a $ssl_client_certificate variable.  


proxy_set_header SSL_CLIENT_CERT $ssl_client_certificate;  


Done, and done...


Well, no, that does not work.  


One, the HTTP connector does not transfer any SSL information into your context, so the SSL_CLIENT_CERT is ignored.


Two, even once you find the magical configuration that will grab the SSL information, it fails because it does not see the NGINX $ssl_client_certificate value as a valid PEM encoded certificate.


The first part of the solution: Apache Tomcat *can* pull the SSL context in with just a simple addition to your server.xml configuration - just add the SSLValve to your Engine configuration.
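For reference, the addition is a one-liner; a sketch of the server.xml fragment, with the Engine's other attributes and children elided:

```xml
<!-- server.xml: inside the <Engine> element -->
<Engine name="Catalina" defaultHost="localhost">
  <!-- Rebuilds the SSL request attributes from headers set by the proxy -->
  <Valve className="org.apache.catalina.valves.SSLValve" />
  <!-- ... existing Host and Realm configuration ... -->
</Engine>
```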


The information in the link provides all the necessary details for setting it up on Tomcat and HTTPD's side.  It even shows setting the SSL_CLIENT_CERT header.


But, translating that to NGINX was necessary, and not as direct as one may think.  


NGINX actually instantiates 2 SSL variables for the client cert: $ssl_client_certificate, and $ssl_client_raw_cert.  


Yet, adding those headers via proxy_set_header, and trying it with either of those client_cert variables failed when the SSLValve attempted to decode them.


Hmmm...looking at that value in NGINX, or copying it out and running it through openssl x509 produces valid results...


The problem, it turns out, was hiding both behind the scenes, and in front of my face.


Take a look at the first comment in the invoke function of the org.apache.catalina.valves.SSLValve source (line 72 in the link):
   
/* mod_header converts the '\n' into ' ' so we have to rebuild the client certificate */


The valve then proceeds to add a line feed back in at every space.  In this case, the SSL certificate now has duplicate '\n' characters, so the format becomes invalid.


SSLValve implicitly assumes you will be using Apache HTTPD...and all the nuances that come with it.


Really, it's right out there.  Tomcat and HTTPD are both Apache, so that is where the focus is for integration.  While I don't fault them for the implementation, a bit of explicit warning is probably in order.  I may even try to update the SSLValve implementation at some point if I get time.


Still, this actually provides the final part of our solution: get NGINX to send the CLIENT_CERT the same way mod_header does.  


Oh...NGINX does not have any built-in way to modify a variable's value when assigning it to a header...


But!  It does have a Lua plugin, which comes precompiled when you use OpenResty.  Not exactly the way I had hoped it would fall out, but that's all it took.  Have Lua modify $ssl_client_raw_cert to translate all '\n' to ' ', and SSLValve accepts it, and Spring properly authenticates the user.  The actual change required (in addition to the headers required by SSLValve) in the location section of the config is:


set_by_lua $client_cert "return ngx.var.ssl_client_raw_cert:gsub('\\n',' ')";
proxy_set_header        SSL_CLIENT_CERT         $client_cert;


Success!  Tomcat can use the client cert for authorization, and we are no longer confined to HTTPD + AJP when using SSL and reverse proxies.

There are probably other ways to fix this up, but this method appears to be a good, general solution.

