If you already have a live Kubernetes (K8s) or K8s-compatible (e.g. OpenShift) cluster, you can deploy an IPS cluster on top of it within a matter of minutes.
If you do not have a cluster of your own, you can simply use a cloud provider (e.g. AWS)
to bring up a K8s cluster, ready for an IPS deployment right away.
For example, we have had great success with kubeadm on bare metal as well as AWS instances;
we have composed a small guide on getting a cluster running with kubeadm on Ubuntu
(compatible with AWS EC2 as well!), which is also included inside our distribution bundle.
You will have a fully-fledged IPS cluster on top of your existing infrastructure,
able to deploy as many ESB instances as your infrastructure permits, with centralized management and monitoring of their operation.
The native IPS distribution ships with a remote installer: the installer runs outside the K8s cluster, accessing the master and worker nodes via SSH to push the necessary files and configurations. You can run it from anywhere (even from within the cluster itself), as long as the following requirements are fulfilled:
- A (local or remote) cluster running K8s 1.2 or higher, and 1.8.6 or lower (check out the official kubeadm guide or our tailored version if you want to create a brand-new cluster)
- A bash (or bash-compatible) console where you can run the installer (usually any modern Linux system would work, as would the Windows Subsystem for Linux; the native Mac OS X shell may not be fully compatible, and we have not yet verified the installer on it)
- SSH access from the above machine/console to all nodes (master and workers) of the cluster, on a sudoer account, i.e. one which can elevate itself for root access (available by default under most cloud providers, including AWS)
- Internet connectivity (for fetching the necessary Docker images, third-party libraries and license material) on the worker nodes and on the system (console) where you are invoking the installer
- A client key obtained via any of the AdroitLogic product download pages (the IPS for Linux download page or otherwise)
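The SSH and sudoer requirement can be verified before running the installer. Below is a hedged pre-flight sketch (not part of the installer; the node logins are illustrative placeholders in the same user@host form used by config.sh) that reports a FAIL for any node that rejects key-based SSH or passwordless sudo:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check (not part of the installer): confirm that
# key-based SSH and passwordless sudo work on every node. The logins below
# are illustrative placeholders in the same user@host form as config.sh.
MASTER=user1@node1
NODES=(user2@node2 user3@node3)

results=()
for node in "$MASTER" "${NODES[@]}"; do
  # BatchMode=yes forbids interactive password prompts; 'sudo -n true'
  # fails fast when passwordless privilege escalation is not configured.
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" 'sudo -n true' 2>/dev/null; then
    results+=("OK:   $node")
  else
    results+=("FAIL: $node (check SSH keys and sudoers configuration)")
  fi
done
printf '%s\n' "${results[@]}"
```

If any node reports FAIL, fix its SSH key or sudoers setup before launching deploy.sh.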
The native installer may not yet be listed on the IPS product page, but you can still try it out by going through the download process (using this download link instead of the default distribution download link sent out in the confirmation email).
If you do not have a Docker Hub account (as requested by the IPS product download form) and do not wish (or do not have the capacity) to create one, you can simply go through the UltraESB-X product download form and use the obtained client key for trying out this IPS installer.
If you have already tried out the UltraESB-X, UltraStudio or the standalone IPS installer, you can reuse the same client key that you obtained previously; you will be granted a fresh evaluation period to try out IPS even if the other product subscription(s) have already expired.
The client key is usually delivered to you via email once you request an evaluation, as an attachment (client.key.properties) of the evaluation confirmation email.
If you cannot find the confirmation email delivered under the email account that you provided in the evaluation form, kindly check your spam/junk and trash mailboxes as well;
if you still cannot find it, please write to us at license@adroitlogic.com so that we can resend your client key to you.
Download the IPS installer bundle (~18 KB), and extract it to a desired location.
(Since it is only a small set of deployment scripts and configuration files, there is no requirement to "install" it under an installation directory (e.g. /opt/) per se.)
Place the client.key.properties file inside the node directory of the extracted installer.
(The directory should now contain three files: client.key.properties, license.key.properties and license.conf.properties.)
Edit config.sh in the installer root, providing the following parameters:

- MASTER: SSH login for the master node, in user@hostname format (e.g. user1@node1)
- NODES: list of SSH logins for the worker nodes, in the same user@hostname format
- SSH_ARGS: additional SSH arguments (e.g. a private key) that would be applied when connecting to each node
Several other customization parameters are available in config.sh,
such as running against an external/existing database or Elasticsearch server (instead of starting in-cluster pods),
disabling statistics for resource conservation, etc., described by comments inside config.sh itself.
Example 1: A four-node cluster (node1-node4; node1 being the master) that uses both MySQL and Elasticsearch in-cluster:
MASTER=user1@node1
NODES=(user2@node2 user3@node3 user4@node4)
# (other parameters unchanged)
Example 2: A three-node cluster (with hostnames node1-node3; node1 being the master)
that runs against an external MySQL DB (on port 9906 at mysqlhost) hosting a custom database ips_database,
accessible by MySQL user ips_admin with password $3cU4eQ@sZ,
and with statistics disabled (to avoid the resource overhead of the associated Elasticsearch server instance):
MASTER=user1@node1
NODES=(user2@node2 user3@node3)
DB_IN_CLUSTER=false
DB_URL=jdbc:mysql://mysqlhost:9906/ips_database?useSSL=false
DB_USER=ips_admin
DB_PASS='$3cU4eQ@sZ'  # single-quoted so bash does not expand '$3'
ES_ENABLED=false
# (other parameters unchanged)
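Since config.sh is plain bash, characters such as $ in the password must be protected from shell expansion, and the JDBC URL can be composed from its parts if that is more convenient. A minimal sketch (the DB_HOST/DB_PORT/DB_NAME helper variables are hypothetical; only DB_URL, DB_USER and DB_PASS are actual config.sh parameters):

```shell
#!/usr/bin/env bash
# Illustrative only: DB_HOST/DB_PORT/DB_NAME are hypothetical helpers;
# DB_URL, DB_USER and DB_PASS are the actual config.sh parameters.
DB_HOST=mysqlhost
DB_PORT=9906
DB_NAME=ips_database

DB_URL="jdbc:mysql://${DB_HOST}:${DB_PORT}/${DB_NAME}?useSSL=false"
DB_USER=ips_admin
# Single quotes stop bash from expanding '$3' inside the password.
DB_PASS='$3cU4eQ@sZ'

echo "$DB_URL"
```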
IPS has been verified against MySQL 5.7.x and Oracle 12c databases, and it may work with other versions as well. The installer currently defaults to MySQL, but you can always either modify it yourself to support a different DBMS or contact us for a modified version for your preferred DBMS.
If you use a custom database configuration, the configured DB user should have admin access to the allocated database (including table create/drop permissions in addition to data manipulation). This is required because IPS self-initializes the database during its first run. If you are unable to provision such a user, contact us so that we can send you a preconfigured DDL (SQL database initialization script) that you can run via a privileged user.
Launch deploy.sh in the installer root:

$INSTALLER_HOME$ ./deploy.sh
Initially the installer will run a basic sanity check of the config.sh parameters,
and installation will be halted if any misconfigurations are detected.
In that case, please revise config.sh and make the necessary adjustments.
You can always contact us or email us for more specific details regarding any issues encountered.
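As an illustration of the kind of validation involved (this is a sketch, not the installer's actual logic), a minimal bash sanity check of the configured SSH logins might look like:

```shell
#!/usr/bin/env bash
# Minimal sketch (not the installer's actual check): verify that MASTER
# and every NODES entry look like user@host before proceeding.
MASTER=user1@node1
NODES=(user2@node2 user3@node3)

errors=0
for login in "$MASTER" "${NODES[@]}"; do
  # one '@' separating two non-empty, whitespace-free parts
  if [[ ! "$login" =~ ^[^@[:space:]]+@[^@[:space:]]+$ ]]; then
    echo "Malformed SSH login: '$login' (expected user@host)" >&2
    errors=$((errors + 1))
  fi
done
echo "misconfigurations detected: $errors"
```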
The installer will also ask for your consent to download the necessary Docker images from Docker Hub and the MySQL Java client distribution.
The installer will prepare each node, and deploy the IPS infrastructure components on the master node. Depending on the nature of superuser access (e.g. whether passwordless sudo is available), you may need to manually enter passwords for privilege escalation during set-up of each worker node.
Once deploy.sh completes execution successfully, you have your IPS cluster up and running. Congratulations!
Access the IPS dashboard using the URL https://<hostname or IP of any of the worker nodes>:30080.
It may take a few minutes for the dashboard to become available, as the initial system bootstrap operations (populating the database, initializing default settings etc.) need to be performed.
Your browser may complain about an "insecure" or "non-private" connection when visiting the dashboard URL; this is expected as the evaluation version uses a self-signed certificate for the dashboard, and is not a major concern because you are accessing a cluster-local webapp (i.e. your traffic would not reach outside the K8s network). Simply follow the browser-specific instructions to mark the site as trusted (e.g. "Add Exception" in Firefox or "Proceed to this site" in Chrome) to bypass the warning.
Log in to the dashboard using the default username-password pair admin:admin.
When defining the container image for your deployments under the IPS native installation, you must use 17.07.2-SNAPSHOT as the tag, instead of the default 17.07 tag mentioned in the documentation and samples.
In deploying samples, while you earlier had to use the host-only address of the VM-based IPS node for accessing the deployed services, you can now use the hostname of any worker node in the K8s cluster, similar to any other K8s-based application deployment.
What follows is a rough outline of the changes that the installer performs on top of your K8s cluster.
If the installer fails at some point, and teardown.sh somehow proves incapable of reverting the partially performed changes,
you can manually revert these operations to bring your K8s cluster back to its former state.
(If such a scenario arises, please write to us with the encountered issue so that we can investigate it and take possible actions to improve the installer.)
- download and extract the MySQL Java connector JARs into the /tmp/ips directory of the system running the installer
- on each worker node:
  - create a /etc/ips directory
  - copy the MySQL connector JAR to /etc/ips/lib
  - copy the licensing-related files to /etc/ips/license
- prepare the K8s IPS deployment configurations in /tmp/ips/master on the system running the installer
- copy the prepared configurations to /tmp/ips on the K8s master node
- create namespaces ips-system (for admin components) and ips (for ESB deployments) on the K8s cluster (via the master)
- create a cluster role and binding allowing anonymous retrieval (get) of node statistics (nodes/stats), for use in the IPS dashboard
- deploy the IPS components (replication controllers and services) into the ips-system namespace
Except for the files being copied (to /etc/ips/ on worker nodes and /tmp/ips/ on the master)
and the cluster role bindings being generated for node metrics,
all other changes are isolated to the ips and ips-system namespaces,
and hence can be reverted by simply deleting those two namespaces from the K8s system.
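Given that isolation, the manual revert can be sketched as a dry run that prints the clean-up commands instead of executing them (the node logins are illustrative; the cluster role and binding created for node statistics must also be removed, under whatever names the installer assigned to them):

```shell
#!/usr/bin/env bash
# Dry-run sketch of a manual IPS revert; pipe the output to "bash" (or
# drop the final printf) to actually execute against your cluster.
MASTER=user1@node1                 # illustrative, as in config.sh
NODES=(user2@node2 user3@node3)

revert_cmds=(
  # everything inside the two namespaces goes away with the namespaces
  "kubectl delete namespace ips ips-system"
  # configurations copied to the master
  "ssh $MASTER rm -rf /tmp/ips"
)
for node in "${NODES[@]}"; do
  # files copied to each worker (sudo needed: /etc/ips is root-owned)
  revert_cmds+=("ssh -t $node sudo rm -rf /etc/ips")
done

printf '%s\n' "${revert_cmds[@]}"
```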
Execute teardown.sh in the installer root in order to delete all resources generated by the installer,
effectively wiping the IPS installation from your K8s cluster (except for any downloaded Docker images).
If teardown.sh fails for some reason, other components of your K8s cluster should not be affected,
and you can follow the inverse of the installation steps to clean up your system (or contact us for further support).
The hostnames/IP addresses used in config.sh should be the same as those being used as node names on the K8s side.
Otherwise the IPS components may fail to recognize the nodes of the cluster (including the master).
We are working on improving the installer so that you can specify separate values for node names and SSH hostnames.
Docker images for IPS management components will start getting downloaded on demand, as and when they are defined on the K8s side. Hence it may take some time for the system to stabilize (i.e. before you can access the dashboard).
Similarly, the UltraESB-X Docker image will be downloaded on a given worker node only after the first ESB instance gets scheduled on that node,
meaning that you might observe slight delays during the first few ESB cluster deployments.
If necessary, you can avoid this by manually running docker pull adroitlogic/ips-worker:17.07.2-SNAPSHOT on each worker node.
The installer currently uses the same SSH parameters (private key etc.) for accessing all K8s nodes,
as indicated by the SSH_ARGS (third) parameter in config.sh.
If your nodes need to use different private keys, the installation script would need to be modified to facilitate this.
Alternatively, if your nodes allow password-based log-in, you can simply leave SSH_ARGS blank
and enter the password for each node manually while the installer is running.
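For instance, if all nodes share one private key, a config.sh fragment could look like the following (the key path is hypothetical; -i and StrictHostKeyChecking are standard OpenSSH options):

```shell
# config.sh fragment (illustrative): one shared private key for all
# nodes, plus suppression of interactive host-key prompts during
# installation. The key path is a hypothetical example.
SSH_ARGS="-i $HOME/.ssh/ips_eval.pem -o StrictHostKeyChecking=no"
```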
Your IPS cluster will be expandable up to 20 physical nodes (the default supported by the evaluation license).
If your actual K8s cluster is larger than 20 nodes, you may experience some failures in your cluster deployments, as K8s may attempt to deploy some UltraESB-X instances on nodes for which a license cannot be issued (due to them being beyond the 20-instance limit). In such cases you can use the node group feature to create a node group with 20 instances (those for which licenses have been successfully issued), and select that node group (instead of the default, all-nodes group) during future deployments. If you wish to evaluate a larger deployment, feel free to write or email us so that we can evaluate your scenario and make the necessary arrangements.
You will have unrestricted access to all IPS features, until the end of your evaluation period (60 days by default). If you wish to extend your evaluation period, feel free to contact us or email us with the details of your requirement, and we would be more than happy to assist you.