Plugin Scripts
Sal plugins can include any number of scripts, which are downloaded by clients running a supported operating system (according to the plugin's get_supported_os_families() result). The scripts are downloaded during the Munki run's preflight stage and then executed during the postflight stage.
Create a scripts directory in your plugin folder which includes any scripts you desire.
Scripts must have a functioning shebang line (e.g. #!/usr/bin/python) for the client machine's environment.
The information sent back to the server can either be refreshed every time the client checks in, or it can be stored up to the retention limit (historical data).
Plugin scripts append their information to a results plist file, which is then submitted by the postflight script. To facilitate this process, you can import the sal-scripts utilities module and call the add_plugin_results function.
For example:

```python
# ...
import sys

# Python won't know how to find the sal-scripts utilities unless we add
# the folder to the Python path.
sys.path.append('/usr/local/sal')
import sal.utils

# Do some stuff...
data = process_some_stuff()

# Add results to the results plist.
sal.utils.add_plugin_results('Plugin Name', data)
```
The data parameter to add_plugin_results should be a dictionary. Starting in Sal 3.0.0, when your data is submitted, Sal will attempt to save it in various native fields, so you can perform more efficient queries. These fields are:
```python
pluginscript_data_string = models.TextField(blank=True, null=True, db_index=True)
pluginscript_data_int = models.IntegerField(default=0)
pluginscript_data_date = models.DateTimeField(blank=True, null=True)
```
Otherwise, all values will be cast to strings, so if you need to convert complex data, do so prior to calling add_plugin_results.
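For instance, here is a sketch of a data dictionary whose values can land in those native fields; the key names and values are hypothetical, and a nested value (a list) is serialized to a string up front since it won't fit a native field:

```python
import datetime
import json

# Hypothetical values gathered by a plugin script.
catalogs = ['production', 'testing']

data = {
    # A plain string fits pluginscript_data_string.
    'SoftwareRepoURL': 'https://munki.example.com/repo',
    # An integer fits pluginscript_data_int.
    'DaysSinceLastRun': 4,
    # An ISO-formatted datetime string can be parsed into pluginscript_data_date.
    'LastRunDate': datetime.datetime(2024, 1, 15, 9, 30).isoformat(),
    # Complex values must be converted before submission; JSON is one option.
    'Catalogs': json.dumps(catalogs),
}
```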
By default, add_plugin_results will replace any existing data in the database with the newest submission. If you pass the optional historical keyword argument (set to True), you can tell Sal to keep historical values as well. Data marked as historical is stored in the database until the retention limit configured in Settings is reached (180 days by default).
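To make the retention behavior concrete, here is a small sketch, not Sal's actual code, of the kind of date-based pruning the server performs on historical rows once the retention limit is reached:

```python
import datetime

def prune_historical(rows, retention_days=180):
    """Keep only rows newer than the retention cutoff.

    Illustration only: Sal's maintenance task does the equivalent
    server-side. `rows` here is a plain list of dicts, each with a
    `recorded` datetime.
    """
    cutoff = datetime.datetime.now() - datetime.timedelta(days=retention_days)
    return [row for row in rows if row['recorded'] >= cutoff]

rows = [
    {'recorded': datetime.datetime.now() - datetime.timedelta(days=10)},
    {'recorded': datetime.datetime.now() - datetime.timedelta(days=400)},
]
kept = prune_historical(rows)  # only the 10-day-old row survives
```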
Plugins can then access plugin results through the PluginScriptSubmission and PluginScriptRow models.
Please keep in mind that you must code your processing of these results for the possibility that not all machines will have submitted values yet. For example, the Encryption plugin handles results for enabled and disabled encryption status, as well as machines which haven't submitted values (unknown).
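As a sketch of that defensive pattern (plain Python, not the Encryption plugin's actual code), derive the "unknown" bucket from the machines that have not reported either value:

```python
def bucket_by_status(total, enabled, disabled):
    # Machines that haven't submitted a value yet end up in 'unknown';
    # never assume every machine has reported.
    unknown = total - enabled - disabled
    return {'enabled': enabled, 'disabled': disabled, 'unknown': unknown}

counts = bucket_by_status(total=100, enabled=70, disabled=20)
```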
In most cases, you probably want to use plugin script data to filter machines, since plugins are primarily focused on grouping and counting properties of machines. If you just want to retrieve plugin script data, the pattern looks like this:
```python
from server.models import PluginScriptSubmission, PluginScriptRow

# Get all plugin data for MunkiInfo:
munki_info_submissions = PluginScriptSubmission.objects.filter(plugin='MunkiInfo')
for submission in munki_info_submissions:
    # One submission from one machine.
    data = submission.pluginscriptrow_set.values_list(
        'pluginscript_name', 'pluginscript_data')
    # Do something with that data...

# To get values of a single plugin script data key per hostname:
values = (
    PluginScriptSubmission.objects
    .filter(plugin='MunkiInfo')
    .filter(pluginscriptrow__pluginscript_name='InstallAppleSoftwareUpdates')
    .values_list('machine__hostname', 'pluginscriptrow__pluginscript_data'))

# Just get one plugin data key's values:
values = (
    PluginScriptSubmission.objects
    .filter(plugin='MunkiInfo')
    .filter(pluginscriptrow__pluginscript_name='InstallAppleSoftwareUpdates')
    .values_list('pluginscriptrow__pluginscript_data', flat=True))

# etc.
```
Queries will all generally follow this pattern: filter PluginScriptSubmission by a plugin name, and then pull out individual key/value pairs from the PluginScriptRow model (through its related field pluginscriptrow).
If this is all too wild and you're having trouble locating data, take a look at the Sal admin site to browse through the database records, and familiarize yourself with the Django queryset API and the models intro.
Using plugin data to filter machines, as mentioned before, is probably the primary use of plugin script data. The Sal MunkiInfo plugin demonstrates a method for drilling down through the related fields of the Machine model to count the different possible values:
```python
class MunkiInfo(sal.plugin.Widget):
    # ...

    def get_context(self, queryset, **kwargs):
        context = self.super_get_context(queryset, **kwargs)
        # ...
        # HTTP-only machines
        context['http_only'] = (
            queryset
            .filter(
                pluginscriptsubmission__plugin='MunkiInfo',
                pluginscriptsubmission__pluginscriptrow__pluginscript_name='SoftwareRepoURL',
                pluginscriptsubmission__pluginscriptrow__pluginscript_data__startswith='http://')
            .count())
        # ... further queries follow
```
The above snippet filters the machine queryset to only include machines which match all three of the filter arguments:
- The pluginscriptsubmission is from a plugin named 'MunkiInfo'. (Sub in the name of your plugin).
- The name of the returned data (the key name for the data dictionary your script submits) is named "SoftwareRepoURL".
- The data value (the value of the SoftwareRepoURL item in the script's data dict) starts with 'http://'.
Finally, the queryset API method count is used to set the context's 'http_only' value to the number of machines which match those criteria.
Rather than repeat quite long queryset field lookups, you can use the Django Q class to reuse them, as in the actual MunkiInfo plugin's code:
```python
# Abbreviated...
from django.db.models import Q

REPORT_Q = Q(pluginscriptsubmission__plugin='MunkiInfo')
# URLS is a sequence of key prefixes (including 'SoftwareRepo') defined
# earlier in the plugin.
URL_QS = {k: Q(pluginscriptsubmission__pluginscriptrow__pluginscript_name=k + 'URL')
          for k in URLS}

# ...

def get_http_only(self, machines):
    return machines.filter(
        REPORT_Q, URL_QS['SoftwareRepo'],
        pluginscriptsubmission__pluginscriptrow__pluginscript_data__startswith='http://')

def get_context(self, machines, group_type=None, group_id=None):
    context = self.super_get_context(machines, group_type=group_type, group_id=group_id)
    context['http_only'] = self.get_http_only(machines).count()
```