Rambling post ahead...just an attempt to keep writing, and perhaps generate some focused topics for later posts.
I've noticed a rapid push for more and more security tools to be installed on everything from a toaster (e.g. IoT devices) to desktops, to VMs and physical servers.
This one monitors your activity (all of it? Specific actions?); this one verifies that blob of bits is benign (or, really, not known to be malicious); this one prevents you from using parts of the system (USB ports, CD/DVD drive).
Luckily...it leaves about 1/4th of the system available for those actual value-generating activities (hopefully those activities are ok, even if monitored). Hopefully we all over-purchased resources so we can handle the current - and the future - security tools that will be required on our systems.
It's not unexpected. It's a lot easier to sell a product that offers (the illusion of) control, than to be constantly vigilant, or work with those around you to improve actual security.
And it's not that these products can't be valuable in one's goal to maintain the security, integrity, and privacy of your data, environment, and, of course, yourself. They just tend to become a way to say you've done something to improve security, without actually proving it improved your - or your customers' - security.
Then, there's the issue of parsing all this security data that the tools generate...which requires resources that also need to be secured...
Wednesday, February 1, 2017
Sunday, January 29, 2017
The Benefit of Not Using the Team Development Environment
I'm not a "standard"-compliant type of developer. At least, not when it comes to my development environments.
App server integrated with my IDE? Nah. I'll spin up a new VM with a packaging-compliant version, secured as close as possible to the deployed instance.
There's just something about creating your own deployment instance that helps me settle in. Now I know why Tomcat and NGINX have trouble speaking SSL to each other.
And it also uncovers trivial, yet easily hidden, deployment errors early.
Not long ago, we had an intern code up a small metrics collection component. I was on a task that had me touching just about all parts of the code, which caused me to get quite a bit out of sync with the primary development branch. Other developers were happily chugging along, pulling the new code, finishing tasks from the backlog. Happy times.
I finally reached a stable point where I could start merging in the fast moving development line (trying to avoid really intense merges). It all merged in without much trouble. I had it building a new WAR, and felt pretty confident.
Then, I tried to run it in my (non-standard) environment. Hmmm, what's this "ERROR" message? Perhaps it can be ignored as an in-progress task?
"Can't create /metrics.tmp"
What? Why is it trying to create a file in the root directory? Time to dig!
Oh. The new metrics code did not specify a directory - or offer a property to specify one - when creating a temporary file for collecting the metrics.
How did it work? Well, he was fully set up with the standard environment, which ran Tomcat from his Eclipse IDE (no, I deviate here too and use IntelliJ). The file was created without issue since the default directory was his home directory, and Tomcat ran under his identity.
Even deploying it to a development test environment did not uncover the issue. The failure did not stop the overall startup process, everything else ran, and metrics were not a priority when testing a pre-MVP product. That environment also used a "development"-style setup, so the file creation succeeded there too - it just dropped the file alongside the application installation directory.
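A minimal sketch of the kind of bug involved, with hypothetical class and property names. The buggy version uses a bare relative path, so the file lands in whatever the process's working directory happens to be ("/" under some service setups); the safer version honors a configurable directory and falls back to the JVM temp directory:

```java
import java.io.File;
import java.io.IOException;

public class MetricsWriter {

    // Buggy version: a bare relative path means the file is created in the
    // process's current working directory - the root directory in my case.
    static File createMetricsFileBuggy() throws IOException {
        File f = new File("metrics.tmp");
        f.createNewFile();
        return f;
    }

    // Safer version: read the directory from a system property (hypothetical
    // property name), defaulting to java.io.tmpdir.
    static File createMetricsFileSafe(String dirProperty) throws IOException {
        String dir = System.getProperty(dirProperty,
                System.getProperty("java.io.tmpdir"));
        return File.createTempFile("metrics", ".tmp", new File(dir));
    }
}
```

The safe variant works under any identity and any working directory, which is exactly the kind of assumption a non-standard environment flushes out.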
So, the next time you start following the "New Developer Environment Setup" docs, you may want to try a few deviations. It may save some troubleshooting down the line.
Friday, November 4, 2016
JPA Follies
While I have been responsible for some - well, many - missteps when implementing a JPA (+Spring+Hibernate) layer for systems, this is one I'm pretty sure I inflicted entirely on myself.
TL;DR: Hibernate throws a "LazyInitializationException" within an @Transactional method, which turns out to be because I attempted to "pre-load" the caches during startup, only to have Lazy (Proxy) objects loaded and used during a subsequent (i.e. "Session closed") request. Removing the pre-loading or using an EntityGraph to eagerly fetch the data corrected it.
I had 2 tasks to work on this week:
- Finish the transition to Spring Data JPA
- Minimize the latency for clients requesting data
#1 was going well, but I was still seeing > 60s "Time To First Byte" (TTFB) for #2, and that kept creeping up as data was added. We're talking > 15,000 rows of multi-table (transformed) data being returned.
Fortunately, that high latency was only on the initial load. Subsequent loads used the cached versions, so I was hitting < 7 second loads for 15,000+ rows of data.
"So...", I think, "why not pre-load the data!" It makes sense, and I know there are potential pitfalls, but they should be fairly simple to work through if I search around the 'net.
Wow. Mrs. Foot, meet Mr. Mouth.
I add in the call to load the data at startup, and then make my request, ready to praise myself for being a genius.
"LazyInitializationException - no Session"
WTF?
I trace through the request, everything needing an @Transactional - and being called via a Spring Managed Component - is in place, and even the TransactionManager says the transaction is active right before accessing the proxied fields.
What. The. <Fill in favorite curse>?
I expected, and saw, the "proxy" class being used by the loaded data. The methods extracting the fields are being called within the @Transactional (using the getters and setters as required; not direct field access).
It started percolating in my head that perhaps data was being loaded from another thread. Yet, nothing else was running/processing at the time; this was the only thread loading and using data.
Well...I did have code to load the data from the database after startup. But...it loaded...
Oh...F... It called the repository method, but never accessed any of the lazy loaded data, so...it cached the proxy object, closed the session, and started laughing knowing the problems that would happen (yeah, my code is a jerk).
Remove initialization code. Rerun test. No LazyInitializationException. We're back to the long initial load, but that can wait for after getting everything working.
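For reference, a sketch of the EntityGraph fix, with hypothetical entity and repository names. Spring Data JPA's @EntityGraph tells Hibernate to fetch the association eagerly for that query, so anything cached at startup is fully initialized data rather than a proxy bound to a closed Session:

```java
@Entity
public class Metric {
    @Id
    private Long id;

    // Lazy by default for collections: touching this outside an open
    // Session is what triggers LazyInitializationException.
    @OneToMany(mappedBy = "metric", fetch = FetchType.LAZY)
    private List<Sample> samples;
}

public interface MetricRepository extends JpaRepository<Metric, Long> {
    // Eagerly fetch 'samples' for this query, so entities pre-loaded at
    // startup are fully populated before the Session closes.
    @Override
    @EntityGraph(attributePaths = "samples")
    List<Metric> findAll();
}
```

With this in place, the startup pre-load caches real data instead of proxies, and the subsequent requests never need the original Session.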
Wednesday, April 8, 2015
NGINX + Apache Tomcat - Certificate Proxying Adventures
I ran into a problem a while back: I identified NGINX as the best technology to reverse proxy our Apache Tomcat instance, but there was 1 particular requirement that had no real solution at the time:
- Tomcat must use X509 Client Authentication
E.g.:
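For context, that requirement looks something like this in server.xml (hypothetical ports and keystore paths) - clientAuth="true" is what makes Tomcat demand an X509 client certificate during the SSL handshake:

```xml
<!-- Hypothetical direct-SSL connector: Tomcat terminates SSL itself and
     requires a client certificate (clientAuth="true") -->
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           scheme="https" secure="true" sslProtocol="TLS"
           clientAuth="true"
           keystoreFile="conf/keystore.jks" keystorePass="changeit"
           truststoreFile="conf/truststore.jks" truststorePass="changeit" />
```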
Why was that a problem? If it was Apache HTTPD, it would be easy: use mod_proxy_ajp and off you go. It's a well-documented configuration.
But this was NGINX, and it did not have a recommended AJP bridge which would pass through the appropriate SSL headers. HTTP was the preferred pass through. Even more odd, I could find no actual documentation of this configuration, though it seems like it would be more common (which it might be, but only for internal Enterprise deployments, where X509 is more ubiquitous).
I could not use a backend SSL connection, since Tomcat would pull out the proxy server's X509 certificate for authentication, and the backend web application was not developed with proxying of client certificates in mind, so no special header (e.g. Proxy-User-DN) configuration would work. I could have dug into the source and added it, but 1) the work would have to go through a slew of process and vetting for something I was not technically supposed to be doing, and 2) WHY DOESN'T THIS WORK!? IT SHOULD WORK!
Which brings us to today (well, last week actually).
First, let's explore this problem a bit. This will provide some insight into the why's and how's that make up the (fairly simple) solution.
What do we really want to do? In my case, while a secure connection is desired, it did not have to traverse past the Reverse Proxy (NGINX) - the backend was secured behind private networks and other defenses, and performance was a concern. No, we just needed to present a valid client certificate to the Tomcat server, and the Spring Security framework would do the rest.
With a direct SSL connection, easy; no thought really. All the necessary information is there because Tomcat was performing the SSL handshake and verification, and then adding the certificate, DNs, cipher, etc... to the context.
Okay, so just send over the Client Cert to Tomcat, right? NGINX has a $ssl_client_certificate variable.
proxy_set_header SSL_CLIENT_CERT $ssl_client_certificate;
Done, and done...
Well, no, that does not work.
One, the HTTP connector does not transfer any SSL information into your context, so the SSL_CLIENT_CERT is ignored.
Two, even once you find the magical configuration that will grab the SSL information, it fails because it does not see the NGINX $ssl_client_certificate value as a valid PEM encoded certificate.
The first part of the solution: Apache Tomcat *can* pull the SSL context out of forwarded headers with a simple addition to your server.xml configuration - just add the SSLValve to your Engine configuration.
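In server.xml, that addition is a single line inside the Engine element (a sketch, using the default engine and host names):

```xml
<Engine name="Catalina" defaultHost="localhost">
  <!-- Rebuilds the request's SSL context from forwarded headers
       such as SSL_CLIENT_CERT and SSL_CIPHER -->
  <Valve className="org.apache.catalina.valves.SSLValve" />
  <Host name="localhost" appBase="webapps" />
</Engine>
```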
The information in the link provides all the necessary details for setting it up on Tomcat and HTTPD's side. It even shows setting the SSL_CLIENT_CERT header.
But, translating that to NGINX was necessary, and not as direct as one may think.
NGINX actually instantiates 2 SSL variables for the client cert: $ssl_client_certificate, and $ssl_client_raw_cert.
Yet, adding those headers via proxy_set_header, and trying it with either of those client_cert variables failed when the SSLValve attempted to decode them.
Hmmm...looking at that value in NGINX, or copying it out and running it through openssl x509 produces valid results...
The problem, it turns out, was hiding both behind the scenes, and in front of my face.
Take a look at the first comment in the invoke function of the org.apache.catalina.valves.SSLValve source (line 72 in the link):
/* mod_header converts the '\n' into ' ' so we have to rebuild the client certificate */
And then it proceeds to add a line feed back in at every space. In this case, the certificate NGINX sent still contained its original '\n' characters, so the rebuilt value ends up with extra '\n' characters and the PEM format becomes invalid.
SSLValve implicitly assumes you will be using Apache HTTPD...and all the nuances that come with it.
Really, it's right out there. Tomcat and HTTPD are both Apache, so that is where the focus is for integration. While I don't fault them for the implementation, a bit of explicit warning is probably in order. I may even try to update the SSLValve implementation at some point if I get time.
Still, this actually provides the final part of our solution: get NGINX to send the CLIENT_CERT the same way mod_header does.
Oh...NGINX does not have any built-in way to modify variable values when assigning them to a header...
But! It does have a Lua plugin, which comes precompiled when you use OpenResty. Not exactly the way I had hoped it would fall out, but that's all it took: have Lua modify $ssl_client_raw_cert to translate every '\n' to ' ', and SSLValve accepts it, and Spring properly authenticates the user. The actual change required to the location section of the config, in addition to the headers required by SSLValve, is:
set_by_lua $client_cert "return ngx.var.ssl_client_raw_cert:gsub('\\n',' ')";
proxy_set_header SSL_CLIENT_CERT $client_cert;
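The Lua shim works because it makes the NGINX value indistinguishable from what mod_header would have produced. A minimal sketch of that round trip in Java (hypothetical, truncated certificate body; the real valve also strips and re-adds the PEM banners):

```java
public class CertHeaderDemo {
    public static void main(String[] args) {
        // Hypothetical, truncated certificate body as nginx's
        // $ssl_client_raw_cert delivers it: real newlines intact.
        String raw = "MIIBxT\nCCASYC\nAQEwDQ";

        // The Lua shim: gsub('\\n', ' ') - mimic mod_header's squash
        // so SSLValve's assumptions hold.
        String headerValue = raw.replace('\n', ' ');

        // SSLValve's rebuild of the body: every space becomes '\n' again.
        String rebuilt = headerValue.replace(' ', '\n');

        System.out.println(rebuilt.equals(raw));  // prints true
    }
}
```

Skip the squash (i.e. send the raw value straight through) and the valve's rebuild no longer recovers a clean PEM body, which is exactly the failure described above.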
Success! Tomcat can use the client cert for authorization, and we are no longer confined to HTTPD + AJP when using SSL and reverse proxies.
There are probably other ways to fix this up, but this method appears to be a good, general solution.
Saturday, March 28, 2015
The Beginning
The search for a blogging platform can be frustrating for a tech-head:
- "I want to host and manage my own!" (The control factor)
- "Maybe I'll make my own!" (Not-Invented-Here factor)
- "I want it now!" (The 'play with a new thing' factor)
- "Maybe I'll do this later..." (Fear of failing factor)
Even worse, I owned a domain (perplexedmind.com) for a good 5 years without putting it to use. It was lost immediately after my registration lapsed; I tried to get it again, but it was no longer available. Oh well. Try another variant (perplexedminds.com).
So, here I am, trying out a writing platform (quite nice so far!), and ignoring my intended goal: start a blog.
Still, I'm not too upset. The best thing about doing research is all the tangents you take. You stumble upon the most fascinating ideas, technologies, and, well...other things - things you may have never known existed, like "Draft," or how showering every day is causing more damage to your body than you know.
Annnnddddd...in the end, I circle back around, and click on the "Create Blog" button on Google, where about 87.6% of my online life resides (including my domain).
Now that I think about it, that "one-click. Done." deal has really started defining me. Damn. I'm one lazy guy.
Until next time...