joeware - never stop exploring... :)

Information about joeware mixed with wild and crazy opinions...

12/3/2018

Generic High-Level Steps for DC Locator Functionality

by @ 11:17 pm. Filed under tech

0. If you are on Windows, use the Windows LDAP library and let it handle all of this for you.

1. Determine if your application has been configured to use a specific named Domain Controller; if so, use it.

    a. For debugging purposes only

2. Determine if your application has been configured (hardcoded) to use a specific AD Site; if so, use it and do not autodiscover the site.

    a. For debugging purposes only

3. Determine if your client has been configured (hardcoded) to use a specific AD Site; if so, use it and do not autodiscover the site.

    a. For debugging purposes only

4. Determine if your client has "cached" a previously used AD Site; if so, use it.

    a. Used to improve efficiency especially between reboots, app restarts.

5. If you do not have a site from the preceding steps, determine (autodiscover) the site the machine is in.

6. Retrieve a list of the domain controllers servicing the previously determined site for the domain you need a domain controller for.

7. [Optional but recommended] Find the PDC for the domain (or domains) of the domain controllers you are looking at and exclude it (them) from your list of domain controllers for consideration, UNLESS that is (those are) the only domain controller(s) available.

8. Validate the list of domain controllers to produce a final list of functioning validated domain controllers sorted by validation performance and DNS SRV record priority.

9. If no valid functioning domain controllers make it through steps 1-8, then you either need to select another site (hopefully "close" to the first site) to look in for domain controllers, or you need to process steps 6-8 again with a wider focus of any domain controller in the entire domain.

10. Use domain controller(s) from the list based on the previous sorting, and if using multiple LDAP connections, distribute LDAP requests by DNS SRV record weight.

11. Repeat the process regularly (every few hours), anytime you hit a failure to connect or to get a result set, or if you detect performance is dragging.
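The steps above (minus the Windows-specific shortcuts) can be sketched in code. Here is a minimal, hypothetical Python sketch of steps 5-9: `srv_lookup` and `validate` are stand-ins for a real DNS SRV resolver and a real connection check, injected as callables, and the `_msdcs` record names follow the conventions covered in the DNS SRV Records post below.

```python
# Hypothetical sketch of steps 6-9 above. srv_lookup and validate are
# stand-ins (assumptions) for a real DNS SRV resolver and a real
# connection/bind check; inject whatever implementations you have.

def locate_dcs(domain, site, srv_lookup, validate, exclude_pdc=True):
    """Return working DC hostnames for `domain`, best candidates first.

    srv_lookup(name) -> list of (priority, weight, port, host) tuples
    validate(host, port) -> response time in ms, or None if unreachable
    """
    # Step 6: ask for the DCs servicing the (previously determined) site.
    records = srv_lookup(f"_ldap._tcp.{site}._sites.dc._msdcs.{domain}")

    # Step 9 (fallback): widen the focus to any DC in the entire domain.
    if not records:
        records = srv_lookup(f"_ldap._tcp.dc._msdcs.{domain}")

    # Step 7: exclude the PDC unless it is the only DC available.
    if exclude_pdc:
        pdcs = {h for *_, h in srv_lookup(f"_ldap._tcp.pdc._msdcs.{domain}")}
        non_pdc = [r for r in records if r[3] not in pdcs]
        if non_pdc:
            records = non_pdc

    # Step 8: validate, then sort by SRV priority and validation speed.
    ranked = []
    for priority, _weight, port, host in records:
        ms = validate(host, port)
        if ms is not None:
            ranked.append((priority, ms, host))
    return [host for _priority, _ms, host in sorted(ranked)]
```

Because the lookup and validation are injected, the selection logic itself can be exercised against canned data before it ever touches a live forest.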

Coming Soon: Additional posts with details.

   joe

Rating 4.67 out of 5

12/2/2018

DNS SRV Records

by @ 12:31 am. Filed under tech

Active Directory's location capability is entirely based on open-standard DNS SRV records, which are designed to offer location capability for ANY service. The DNS SRV record RFC is RFC 2782, which you can find at https://www.ietf.org/rfc/rfc2782.txt. There are two main components of the SRV process for domain controllers: registration and lookup.

First, the domain controllers figure out what SRV records need to be registered for their services, depending on various configurations in Active Directory and the registry of each domain controller. Applications aren't involved in this process at all; they simply need to be able to look up the results in DNS. The main issues that can occur on this side are DNS systems that aren't properly allowing dynamic registrations, or Active Directory admins misconfiguring sites and subnets and/or registry keys (directly or via GPO).

Second, the clients that need to access Active Directory query DNS for the service's SRV records in the specific sites in which they need domain controllers, OR look at the global set of service SRV records for all of Active Directory.

The service SRV records are significantly different from other well-known DNS record types such as A/HOST records or CNAME records in that there is a bunch of information packed into them that allows for a fairly robust high-availability service location system. They are exactly the same in that they can be dynamically updated and queried using open-standards-based DNS APIs.

SRV Record Components

Service SRV records can have multiple hosts, and the following components are the publicly available pieces in DNS that make up each SRV record:

Record Name

  • The actual name of the service record in DNS that you specify to look up the record.
  • It is itself made up of several components:
  • _<SERVICE>.<PROTOCOL>.<NAME>
  • SERVICE: The service prefix for the specified service, such as LDAP. Note that there is no requirement for a service name to be prefixed with an underscore, but they usually are; all of the SRV records published by AD are prefixed with an underscore.
  • PROTOCOL: The protocol the record is for, such as TCP or UDP.
  • NAME: The DNS zone the record lives in, such as domain.com.
  • Ex: _ldap._tcp.k16tst.test.loc

Priority

  • The relative priority of the specified host for the record. The lower the value, the more preferred the host. These values are used for picking which hosts should be targeted first.
  • Ex: 0

Weight

  • The relative weight of the specified host for the record. The higher the value, the more preferred the host. These values are used for balancing load between multiple hosts with the same priority.
  • Ex: 100

Port

  • The port the service is available on for this specific host.
  • Ex: 389

Svr HostName

  • The canonical hostname of the target of the record.
  • Ex: k16tst-dc1.k16tst.test.loc.
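To make the components concrete, here is a small Python sketch that splits one SRV line (in the zone-file style that netlogon.dns uses, shown later in this post) into the pieces described above. The field names are my own, not any official API.

```python
def parse_srv(line):
    """Split a zone-file style SRV line into its components, e.g.
    '_ldap._tcp.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.'"""
    name, ttl, _cls, rtype, priority, weight, port, target = line.split()
    if rtype != "SRV":
        raise ValueError("not an SRV record: " + line)
    service, protocol = name.split(".")[:2]       # e.g. '_ldap', '_tcp'
    return {
        "name": name.rstrip("."),
        "service": service.lstrip("_"),
        "protocol": protocol.lstrip("_"),
        "ttl": int(ttl),
        "priority": int(priority),
        "weight": int(weight),
        "port": int(port),
        "target": target.rstrip("."),
    }
```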

In addition to the above, each record also has a TTL specified for it. This controls how fast the records age out and how quickly changes get pushed down through the hierarchy of DNS servers and client caches. The lower the value, the more "dynamic" the records can be to offer up different options, etc. Additionally, the lower the value, the higher the DNS lookup and replication load on the systems as well.

Priority and Weight

Most of the components of a service SRV record should, generally speaking, be self-explanatory. The priority and weight are a little different, as their proper use may not be obvious.

Each service record can have multiple SRV entries associated with it, one for each unique instance of the service. The priority and weight give hints on how the entries should be used.

The priority is a numeric value where the lowest value has the greatest preference. Use all of the entries with a priority of 0 before all of the entries with a priority of 1, before all of the entries with a priority of 10, before all of the entries with a priority of 100, etc. If none of the instances with the lowest priority are responding, drop to the next lowest priority, and so on.

The weight is a numeric value where the highest value has the greatest preference. Unlike with priority, all of the weights of the available entries at the same priority are collected together and normalized to an overall value of 100%, which gives a ratio/percentage of how requests to each service instance should be balanced. Obviously, this should also be applied dynamically in terms of which records are actually for available services at the time of use. This becomes clearer with the examples.

Ex 1: Say you have three instances of the service, each with a priority of 0 and a weight of 100. You should balance the requests across all three instances equally, 33.333% per instance. If one of those instances becomes unavailable, then you should balance the requests across the two remaining instances at 50% per instance.

Ex 2: Say you have three instances of the service, each with a priority of 0, but two have a weight of 40 and one has a weight of 20. Out of every 10 requests, 4 should go to service instance 1, 4 should go to service instance 2, and 2 should go to service instance 3. If service instance 2 (weight 40) becomes unavailable, the remaining weights of 40 and 20 normalize to roughly 67% and 33%: out of every 3 requests, 2 should go to service instance 1 and 1 should go to service instance 3.

Ex 3: Say you have three instances of the service, each with a priority of 0 and a weight of 100, and one instance of the service with a priority of 1 and a weight of 100. Requests to the service should be split three ways between the instances with a priority of 0. If, and ONLY if, all three of those instances become unavailable, then all requests should go to the service instance with a priority of 1.
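The priority/weight rules in the examples above translate into a short selection routine. Here is a sketch in Python (the instance names in the usage are made up):

```python
import random

def pick_instance(records, available, rng=random):
    """Pick one host per the rules above: only the lowest-priority tier
    of currently available hosts is considered, and within that tier the
    choice is random, proportional to weight.

    records: list of (priority, weight, host); available: set of hosts.
    """
    live = [r for r in records if r[2] in available]
    if not live:
        return None
    lowest = min(priority for priority, _, _ in live)
    tier = [(weight, host) for priority, weight, host in live if priority == lowest]
    total = sum(weight for weight, _ in tier)
    if total == 0:                      # all-zero weights: no preference
        return rng.choice([host for _, host in tier])
    roll = rng.uniform(0, total)        # weighted random selection
    for weight, host in tier:
        roll -= weight
        if roll <= 0:
            return host
    return tier[-1][1]
```

With the Ex 2 records and instance 2 down, roughly two thirds of the picks land on instance 1 and one third on instance 3; a priority-1 instance as in Ex 3 is never picked while any priority-0 instance remains available.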

AD Service SRV Records

The SRV records you will see for AD include:

  • _ldap – LDAP service SRV records, including normal LDAP and Global Catalog LDAP.
  • _gc – LDAP service SRV records used only for Global Catalog LDAP.
  • _kerberos – Kerberos KDC service SRV records.
  • _kpasswd – Kerberos Password Change service SRV records.

Here is an example of a complete set of records for the PDC of the root domain in a multi-domain forest with multiple sites. You can see the same information for any specific domain controller by looking at the C:\Windows\System32\Config\netlogon.dns file on each domain controller. In fact, if you are missing AD SRV records in DNS, this is the first place to look to troubleshoot.

_ldap._tcp.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.Default-First-Site-Name._sites.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.pdc._msdcs.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.98fd1190-e167-4734-a585-7981238a135e.domains._msdcs.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
b306bddc-2945-4a7d-b7ce-0bc829c55c5a._msdcs.k16tst.test.loc. 600 IN CNAME K16TST-DC1.k16tst.test.loc.
_ldap._tcp.dc._msdcs.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.gc._msdcs.k16tst.test.loc. 600 IN SRV 0 100 3268 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.Default-First-Site-Name._sites.gc._msdcs.k16tst.test.loc. 600 IN SRV 0 100 3268 K16TST-DC1.k16tst.test.loc.
_gc._tcp.k16tst.test.loc. 600 IN SRV 0 100 3268 K16TST-DC1.k16tst.test.loc.
_gc._tcp.Default-First-Site-Name._sites.k16tst.test.loc. 600 IN SRV 0 100 3268 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.DomainDnsZones.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.Default-First-Site-Name._sites.DomainDnsZones.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.ForestDnsZones.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.Default-First-Site-Name._sites.ForestDnsZones.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.RODCSite._sites.gc._msdcs.k16tst.test.loc. 600 IN SRV 0 100 3268 K16TST-DC1.k16tst.test.loc.
_gc._tcp.RODCSite._sites.k16tst.test.loc. 600 IN SRV 0 100 3268 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.RODCSite._sites.DomainDnsZones.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.RODCSite._sites.ForestDnsZones.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_kerberos._tcp.dc._msdcs.k16tst.test.loc. 600 IN SRV 0 100 88 K16TST-DC1.k16tst.test.loc.
_kerberos._tcp.Default-First-Site-Name._sites.dc._msdcs.k16tst.test.loc. 600 IN SRV 0 100 88 K16TST-DC1.k16tst.test.loc.
_kerberos._tcp.k16tst.test.loc. 600 IN SRV 0 100 88 K16TST-DC1.k16tst.test.loc.
_kerberos._tcp.Default-First-Site-Name._sites.k16tst.test.loc. 600 IN SRV 0 100 88 K16TST-DC1.k16tst.test.loc.
_kerberos._udp.k16tst.test.loc. 600 IN SRV 0 100 88 K16TST-DC1.k16tst.test.loc.
_kpasswd._tcp.k16tst.test.loc. 600 IN SRV 0 100 464 K16TST-DC1.k16tst.test.loc.
_kpasswd._udp.k16tst.test.loc. 600 IN SRV 0 100 464 K16TST-DC1.k16tst.test.loc.
k16tst.test.loc. 600 IN A 192.168.0.75
gc._msdcs.k16tst.test.loc. 600 IN A 192.168.0.75
DomainDnsZones.k16tst.test.loc. 600 IN A 192.168.0.75
ForestDnsZones.k16tst.test.loc. 600 IN A 192.168.0.75
_ldap._tcp.joenetlogontestsite._sites.gc._msdcs.k16tst.test.loc. 600 IN SRV 0 100 3268 K16TST-DC1.k16tst.test.loc.
_gc._tcp.joenetlogontestsite._sites.k16tst.test.loc. 600 IN SRV 0 100 3268 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.joenetlogontestsite._sites.DomainDnsZones.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.joenetlogontestsite._sites.ForestDnsZones.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.joenetlogontestsite._sites.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_kerberos._tcp.joenetlogontestsite._sites.dc._msdcs.k16tst.test.loc. 600 IN SRV 0 100 88 K16TST-DC1.k16tst.test.loc.
_ldap._tcp.joenetlogontestsite._sites.dc._msdcs.k16tst.test.loc. 600 IN SRV 0 100 389 K16TST-DC1.k16tst.test.loc.
_kerberos._tcp.joenetlogontestsite._sites.k16tst.test.loc. 600 IN SRV 0 100 88 K16TST-DC1.k16tst.test.loc.
      

One Last Thing…

I love this model. I think it is extremely intelligent and useful. Microsoft was brilliant for their involvement in SRV records and their use of them in this way. It takes you out of the hole you can be in by depending on any given machine to always be available, whether that machine is a domain controller, a network switch, a Virtual IP / load balancer, whatever. This is an inexpensive, globally redundant mechanism using functionality available in every network that, when used properly, is very useful and just outright awesome.

That being said, I am also very disappointed because Microsoft didn't use it for LDAPS or Global Catalog LDAPS records, nor provide an option to use it for ADLDS, or even for the ADWS service that now runs on Domain Controllers and ADLDS servers for the AD PowerShell cmdlets. Come on, Microsoft. On the positive side, because it is all based on open standards, you can (and I have) write scripts/tools to add/remove additional records as you see fit.

If you haven't checked it out before, check out DNSSrvRec, a quick and dirty tool that I wrote over a decade ago that allows you to quickly add and/or delete SRV records. You can find it at https://www.joeware.net/freetools/tools/dnssrvrec. It is so QnD that there is a super obvious typo bug that you see as soon as you run it, but don't worry, it doesn't impact the functionality. You will note that the usage examples illustrate how to add _LDAPS records, and I use this tool to this day for troubleshooting and temporarily removing or fixing the normal AD SRV records when things are broken.

   joe

Rating 4.82 out of 5

11/26/2018

Logging in Applications (Particularly LDAP Applications…)

by @ 11:07 pm. Filed under tech

While working on some posts about writing code for leveraging Active Directory, I realized that a very weak point I have run into with many (perhaps most) apps is the logging, particularly for use in troubleshooting and/or debugging of issues. I don't care how good of a coder you are (or think you are), your code will eventually be smack dab in the middle of a problem and someone is going to have to troubleshoot it. So assume right from the jump that that is going to happen, and do something about it as you write your very first lines of code, and you won't be called "a moron coder" when someone runs into the issue that needs that debugging… NB: I myself have been called a moron coder, often by myself, sometimes multiple times in a single sitting.

Personally, I have always been a fan of plain text file logging because it is simple to implement and simple to use, requiring no special viewers, is easily searchable, etc., but in the end, ANY logging is better than no logging. If instead of text logging you would rather use binary or XML log entries, and instead of a text file you would rather send logging events to SYSLOG or the Windows Event Log or the Windows Debug Stream or a sweet MariaDB SQL server, or all of them, or something else entirely, please do so if it gets you to implement logging. There is a caveat: the "something else" shouldn't be console logging unless your console is a teletype of some sort, because otherwise the logs will just scroll off the screen and never be seen again, and it is too easy to lose salient details that way. Actually, on second thought, even if you do have a teletype console available, don't use console logging as your sole logging method. I used to have to look through reams of paper with console logging entries and I regularly missed things in them.

I realize that logging can impact performance. If you want to allow the consumer to control the impact they are experiencing, but still allow them to get data that is useful for troubleshooting, set up configurable logging levels such that the higher the configured logging level, the more verbose the logging becomes. Visualize a sliding dial: less logging/higher performance at one end, super verbose logging/lower performance at the other. There should probably be at least 3-5 levels of verbosity, from normal regular running that tracks high-level events to full-on debugging that tracks everything that is happening. If you can figure out a way to do it, you could even log the performance hit currently being experienced because of the level of logging. That would require some baselining in any given environment, but it could be done such that perhaps you occasionally run a "transaction", whatever that is for your app, raw without logging, then run the same "transaction" with the logging and see what the delta is. The more you do that, the closer you can get to defining the performance hit involved.
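A sliding-dial setup like that is easy with Python's standard `logging` module; the 0-3 dial mapping here is an arbitrary choice for illustration, not a standard:

```python
import logging

# Map a single configurable "dial" value to standard logging levels.
# 0 = quiet/fast ... 3 = full debug/slow; the 0-3 scale is arbitrary.
VERBOSITY = {0: logging.ERROR, 1: logging.WARNING, 2: logging.INFO, 3: logging.DEBUG}

def make_logger(name, verbosity, handler=None):
    """Create a logger whose output volume tracks the configured dial."""
    log = logging.getLogger(name)
    log.setLevel(VERBOSITY.get(verbosity, logging.DEBUG))
    log.addHandler(handler or logging.StreamHandler())
    return log

log = make_logger("myapp", 1)              # dial at 1: warnings and errors only
log.debug("per-query detail")              # suppressed at this dial setting
log.warning("DC dc1 response time 900ms")  # emitted
```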

For AD interaction logging in particular, when configured for the most verbose logging you should log entries for the whole process used to select the domain controller: what DC was used to figure out the bootstrap information, anything read in from config files/registry/etc., DNS results, possible DCs, selected DCs, and some of the information used to help determine which DC to use (like performance, capabilities of the DCs, etc.). You want to track all connection attempts, with their parameters, controls, etc., their results (but don't put in clear text passwords!), and the performance of same. All queries, again including parameters, controls, etc., and the results and, ta da, the performance of same. All updates, again including parameters, controls, etc., and the results and performance of same. You really want to include performance information (how long it takes to do various things; response times for connections and queries are usually quite useful) because some of the most common issues as you scale up environments are that something is "running slow", and you don't want to make people break their arms trying to find convoluted ways to determine performance issues. If you are using multiple threads/processes then they should be uniquely identified in the logs as well so you can track the various streams that are likely occurring simultaneously. Oh, and time stamps, lots and lots of time stamps. Perhaps also the machine name the code is actually running on, especially if you could have a pool of application servers running and want to consolidate the logging somehow. Pretty much ANYTHING that would help you troubleshoot issues with connecting to, returning information from, and/or updating information in Active Directory or any other LDAP directory.
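As one (entirely made-up) example of such an entry format, stamping every line with a timestamp, machine name, and thread id costs only a few standard-library calls:

```python
import socket
import threading
import time

def log_line(event, **fields):
    """Format one log entry with a UTC timestamp, the machine the code is
    running on, the thread id, the event name, and any key=value details
    (DC chosen, elapsed ms, result counts, ...). The layout is just one
    possibility, not a standard."""
    parts = [
        time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        socket.gethostname(),
        "tid=%d" % threading.get_ident(),
        event,
    ]
    parts += ["%s=%s" % (k, v) for k, v in sorted(fields.items())]
    return " ".join(parts)

# e.g.: log_line("ldap_search", dc="dc1.example.com", base="DC=example,DC=com",
#                filter="(objectClass=user)", results=42, elapsed_ms=117)
```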

Also, and this is a bit of Psych 101: tie non-configurable log entries that indicate which debug/hardcoding flags are set to the most verbose, performance-impacting logging levels. Track everything put into the code for troubleshooting or debugging that someone may enable and then forget about and leave running for long periods of time; this could include verbose debug logging levels being configured, hardcoding, etc. I see nothing wrong in stamping the log every 15 minutes, for example, when someone has hardcoded a specific machine or other resource to use when the application supports dynamic resource detection. No one wants to see errors popping up in their logs over and over again, so when you do that for the debug configuration items you make it far more likely that someone will notice what is happening and get it corrected. If there is a good reason for the debug configuration then the admin/integrator/dev will understand and be able to properly look past those alert entries in the log for the limited period of time they are debugging, and once they are done it will hopefully be so noisy and painful that they will fix it. You can also put regular warning reminders in the logs for less than optimal configurations, especially configurations that impact security or performance. For example, a log entry once an hour saying "Hey goofball, you are sending passwords across an LDAP connection with no encryption, start using LDAPS or STARTTLS or LDAP Signing/Sealing!" That little bit could help keep your application from being used insecurely and jeopardizing a company.

This should go without saying, but unfortunately I know better, as I have seen regular occurrences of this rule being broken year over year, and as I specifically called out above… Do NOT output clear text security secrets like passwords into log files. If you want to put passwords into the log files, then a few basic rules:

1. It should only be secrets/passwords that you already know and control, i.e. application ID passwords for YOUR application. DO NOT EVER output, in any way, shape, or form, the passwords of anyone using your application. You can and should log the IDs, but don't even think about logging the passwords, or even an encrypted form of the passwords. The user passwords should never be available outside of the memory space of the currently running application, and they should be in memory only for a very short period of time as well, milliseconds at most, the time taken to authenticate the user.

2. The secrets that are OK for you to log you should encrypt in some way, shape, or form so a casual glance cannot pick up on them. And when I say encrypt, I mean actually encrypt; don't do something stupid like Base64/MIME encode them such that anyone who can grab the text can put it into one of a thousand different pages on the internet to revert it back to clear text. Better than encryption is to use a hash. It still isn't foolproof, but you can easily create the hash of the password you think it is and then compare it to the hash that is in the log to make sure it is the same.

3. If you ever gain knowledge, or even have a feeling, that your secrets have been exposed to someone they shouldn't be exposed to, immediately change those secrets. Secrets like application/process passwords should be changed frequently: at least anytime someone who knows the password leaves the team, and at a minimum annually, though preferably monthly, weekly, or even daily. I have even seen applications that changed their own passwords every 8-12 hours. That last one may be a bit excessive, but it really depends on how critical or sensitive the information is that the secrets/passwords are protecting.
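For rule 2, a hash is one line with the standard library. This sketch (the function name is mine) logs a SHA-256 digest of an application-owned secret rather than the secret itself:

```python
import hashlib

def loggable(secret):
    """Return a stand-in for an application-owned secret that is safe to
    write to a log: a SHA-256 digest, not the clear text. To verify which
    password the app was using, hash your candidate and compare digests."""
    return "sha256:" + hashlib.sha256(secret.encode("utf-8")).hexdigest()

# e.g. log.info("binding as appsvc, pw %s", loggable(app_password))
```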

To wrap up logging… Produce good logging; you will thank yourself, and operations will thank you as well when they are forced to use it. Be clear, be concise, be complete. Think about what information someone may need from your application when it is 2AM, it isn't doing what it is supposed to be doing, and someone needs to figure out what is wrong and has called you to ask you to fix it. Things WILL go wrong, period. No one writes perfect code; no system runs 100% perfectly 100% of the time. It may not be your system or code that is failing, but your code could still be blamed. You can NOT depend on other people, especially the people who support the underlying infrastructure that you use, to be able to tell you why your application is failing; it isn't their job, and very often, if the environment has any serious scale, they have no real capability to help you. It is all on you.

   joe

Rating 4.71 out of 5

11/25/2018

Coming Attractions: How to Find Domain Controllers for Fun and Profit (and your various LDAP operations…)

by @ 11:10 pm. Filed under tech

I previously wrote that many applications using Active Directory aren't meeting even the lowest bar for proper Active Directory integration: the ability to properly find an Active Directory domain controller to use for LDAP operations. This is something that regularly plagues me, and it is ridiculous that it is still a problem.

If someone can't properly find a domain controller, is it realistic to expect them to get anything else related to Active Directory truly right? Finding a domain controller is literally step one in "How to query AD with LDAP". If a developer is already bored with step one and doesn't develop it properly, there isn't much hope, IMO, for anything that follows. If a company purposely decides not to find domain controllers properly and still claims "Active Directory Integrated", I would (and do, when I find them) consider the company untrustworthy for at least anything related to Active Directory, and I look at everything else with a jaundiced eye as well.

So what do they do instead of properly locating domain controllers? *A lot* of vendors and *a lot* of developers simply write the code to take an IP address, the FQDN of a host, or the FQDN of the domain in the configuration, and then they hope for the best. They may add "load balancing" or "redundancy" by adding additional IP addresses or FQDNs, or possibly not… Usually not. This truly isn't acceptable for finding Active Directory domain controllers unless you want an application that is susceptible to (read: guaranteed to have) outages. These same vendors and developers (or the customer application folks that depend on the applications) get mad when their apps fail because of these bad decisions, and then they often want to blame the AD folks. Further, they go on to say it is up to the AD admins to find a solution and fix the developers' and vendors' inability to write their applications properly. Seriously… They come at the AD admins saying they should put their domain controllers behind virtual IPs / load balancers, etc. The answer should be "No, do your job properly: go fix your poorly written application, and/or make sure you know what the product you are buying is actually capable of, and only reward companies that do things properly." You will thank me in the end when you DON'T have to keep crutching their failures.

I would really like to more specifically define the term "a lot", as it is an inadequate description, but I simply cannot do it. It stands in for some number that cannot be known, but I can state unequivocally that industrywide it is massive, and it includes apps written in the back rooms of companies for their internal use as well as in the coding pits of some very large, very well-known software vendors that you would expect (yes, expect, but cannot guarantee) to know better, who are showing their disrespect for you by making you pay for their poorly/incorrectly written product. The sales guys will tell you "Yes, our application is compatible with AD", yet by that they simply mean that it can perform basic LDAP operations and they know that Active Directory can speak LDAP.

There is no reason NOT to do this initial step properly, other than vendors expecting customers to pay for what they build no matter how poorly it works. Active Directory is old enough to vote, and it is not the only LDAP directory that has similar DNS SRV record based intelligent service location capabilities available based on RFC 2782. If you are a developer and were writing LDAP code PRIOR to the year 2000, then perhaps you have an excuse not to do this correctly, but… no, I'm lying, you have no excuse at this point. You are lazy and content with half-ass code if you don't think it should be done properly, especially now, nearly 20 years later.

What will follow on the blog is a series of posts that describe in detail various aspects of the DC Locator process (and other AD dev related things) that applications can leverage to properly find domain controllers and be properly redundant. There will be a post on the generic high-level process, a post on pure Windows doing it "The Easy Way"™, a post on pure Windows doing it a little longer and more drawn out, and a post on generic mechanisms that will work on any OS (including Windows) that has DNS resolver lookup and LDAP client network functionality.

Stay tuned…

   joe

Rating 4.57 out of 5

Yes yes I know I know…

by @ 11:02 pm. Filed under general

A while back I said, hey, got a new job, will be spending more time posting stuff and learning new things and sharing that new learning. It started going in that direction, but then my time started getting eaten up more and more with work: issues with people, issues with tech, issues with direction, issues with technical debt, and issues of just not enough time in the day to get everything done that I wanted and needed to get done.

It isn't that I haven't been able to work on stuff outside of work, it is just that it is sometimes tough to get more than an hour here or there[1] because I often have to spend SOOOO much time on work depending on what is going on. And then when I am not working I have to spend SOOO much time trying to catch up on what I was supposed to be doing on the personal side. And then after all work and personal responsibilities comes my joeware stuff, which in the end really is for joy, fun, and stress/creativity release, until such a time that I can find a way to turn it into something that makes me real money.

One big problem with reaching that place where what I do for fun pays for my life is that I really like to help people AND I am not a business man. If I were starving, perhaps I would be more of a business man and see the angles to make the money and properly monetize my creativity, intellectual property, and capability. That being said, we are talking about someone who, ages ago, wrote an article to submit to Windows IT Pro magazine to make the $50 or whatever it was for a basic how-to, and to get it out there in front of so many Windows Server admins (at that time Windows IT Pro Mag was the go-to for Windows Server admins). They turned around and published it in a special security newsletter that cost even more money and had a very limited audience, which absolutely pissed me off, because then I knew it wasn't going to help all of the people it was intended to help. I don't even recall what that specific article was about, but it absolutely ended my days of writing for magazines. It was entirely my fault, of course; I didn't fully understand their control over my content, and I believed (or perhaps wanted to believe) that they were just as interested as I was in enlightening the Windows admins of the world about security, to make the industry better overall. They kind of were, but they were also business people looking to make money, and they knew that what I wrote aligned with the type of content that people who were willing to spend more money on security were already paying more money for. Exactly the kind of thing I am not good at. If I owned a drug company I would probably end up selling the drugs below cost, if not actually giving them away, and then getting a second job to pay for it all. Just like my "real job" pays for all of the stuff I do and have done for the Windows community for the last 20 years.

All that to say that I have done a horrible job with joeware stuff in any public manner lately, but I do have some posts coming that have been slowly getting pieced together over the last number of months. Hopefully it will have been worth the wait. Smile

Also, I am still working on updates to AdMod which will really beef up its power some more, but I have to be VERY careful with that code because it is so incredibly dangerous. Unlike AdFind, where I can quickly toss things into the code, AdMod actually makes changes, and I try very hard to make sure that the changes it makes are actually the changes that were intended.

Aside from that, I have an easy 150 bugs and DCRs to put into AdFind now from things that I have found in my "new"[2] full time job. Also, I have a couple of friends who I work with who send me enhancement suggestions as well. One in particular I have to point out because he told me when I first met him that he knew I didn't like PowerShell and that he would have me converted by the end of the first year of working with him… I was like, ok dude, others have tried and failed, but ok, cool. He now uses AdFind daily and uses AdMod more and more. I didn't try to convert him. It is what it is.

      joe

P.S. Do people read blogs anymore? Or is it all supposed to be Insta, Tweets, podcasts, and Snapchats nowadays?

      [1] An hour here or there is a lot of time joe, wtf is your issue? Well it is and it isn’t. The quality I try to put into what I share with others usually takes a lot more than an hour to produce as I try to look at it from a variety of angles. That is why so much of what I have done has been so flexible and so far reaching. Anyone can just blather on, we all have seen it, I try not to be one of those people. We all have very limited time and I like to think that when you spend your valuable time to read something I have written, it ends up being worth the investment.

      [2] Two years the first week of December wow. It simultaneously feels like it was 90 days and 90 years at once.


      11/8/2018

      If you are looking for any custom artwork for the holidays or otherwise…

      by @ 2:41 pm. Filed under general

The pictures do not do this justice. In person it looks 3D and made me gasp when I took it out of the box. My sister, the official artist (as in she makes art day in and day out), painted it.

      She is amazing and can turn any picture or multiple pictures into just about any artwork you want from rocks to canvas to obviously ornaments.

      She is taking and filling orders for the holidays right now.

      One of the things that a ton of people love are her baby deer stones and pet memorial stones. The deer sit in the corner looking like they are real but asleep. The memorial stones are enough to make your heart stop and think your beloved pet has come back.

Don’t order anything unless you are ready to be a repeat customer, because that is almost certainly what will happen as you want more and more.

      http://www.trendyartist.com/

      https://www.facebook.com/TrendyArtist/

      Instagram: @artistshannonnelson



      10/15/2018

      Digital Wallet

      by @ 11:36 pm. Filed under general

If you intend to sign up for a digital wallet anytime in the future, consider using Coinbase (I chose it[1]) and also consider using the following link for joining. If you use the link and buy at least $100 USD in Bitcoin you will get $10 USD (and so will I).

      http://link.joeware.org/coinbase

      Feel free to share the link yourself. Open-mouthed smile 

I also picked up some Litecoin and Ethereum Classic.

         joe

[1] I looked around for a while before I chose Coinbase. It looks like a solid choice with a decent fee structure.


      9/5/2018

      Chrome and the “Not secure” Message in the address bar Part III

      by @ 11:10 pm. Filed under general

I think I have sorted out the issues with the downloads and have switched the www.joeware.net portion of the site to use the https:// scheme by default.

      If you have any issues downloading when you didn’t before, please let me know at support@joeware.net 

          joe


      Chrome and the “Not secure” Message in the address bar Part Deux

      by @ 8:08 pm. Filed under general

      Slowly getting there…

For the blog, it should now always force the https:// scheme.

The main website will still come up as http:// by default. You can specify the scheme explicitly (https://www.joeware.net) if you are concerned. Trying to force it with .htaccess like I have done with the blog is blowing up the downloads for some reason, so I need to troubleshoot that.
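For anyone wanting to do the same thing on their own site, the usual way to force HTTPS with .htaccess is a mod_rewrite rule along these lines. This is a generic sketch of the common pattern, not the actual rules running on joeware.net:

```apache
# Redirect any plain-HTTP request to the same URL over HTTPS.
# R=301 issues a permanent redirect; L stops further rule processing.
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```

One caveat with this pattern: clients that fetch downloads over plain http:// will get a redirect response instead of the file, so download tools that don't follow redirects can break, which may be related to the problem described above.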

          joe


      8/14/2018

      Chrome and the “Not secure” Message in the address bar

      by @ 5:03 am. Filed under general

      I have received some emails asking why this blog is considered insecure by Chrome.

This is a new behavior in the latest version of Chrome: it marks any website that isn’t using HTTPS / SSL encryption as insecure. Nothing has changed on my end; the site isn’t suddenly insecure. It is the same as it has always been, but now Chrome is trying to help people more clearly realize they shouldn’t feed credit card numbers etc. into pages that aren’t encrypted. Sites that just display information, such as my blog and website, don’t ask for anything critical from you, so it isn’t really all that bad, with the exception that your provider could insert HTML into the page if they like, such as ads or a notice that you are going to go over your bandwidth for the month.

Anyway, I am working with my provider to get certs in place so I can serve the site over HTTPS and people will feel better when it doesn’t say “Not secure”.

         joe


      [joeware – never stop exploring… :) is proudly powered by WordPress.]