Apache Falcon provides several tools to operationalize a Falcon deployment: Alerts for unrecoverable errors, Audits of user actions, Metrics, and Notifications. They are detailed below.
++ Lineage
Currently, Lineage has no way to access or restore information about entity instances created while lineage was disabled. Information about entities, however, is preserved and bootstrapped when lineage is enabled. If you have to reset the graph db, you can delete the graph db files specified in startup.properties and restart Falcon. Please note: you will lose all information about instances if you delete the graph db.
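For reference, the graph db location is driven by startup.properties entries along the lines of the sketch below. The property names and path shown are typical defaults rather than guaranteed values for every release, so confirm them in your own startup.properties before deleting anything.

# Illustrative startup.properties entries pointing at the lineage graph db (verify locally).
# Deleting the directory referenced here removes all stored instance lineage information.
*.falcon.graph.storage.backend=berkeleyje
*.falcon.graph.storage.directory=${falcon.home}/data/lineage/graphdb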
++ Monitoring
Falcon provides monitoring of various events by capturing metrics of those events. The metric numbers can then be used to monitor performance and health of the Falcon system and the entire processing pipelines.
Falcon also exposes metrics for titandb, the graph database used for lineage.
Users can view the logs of these events in the metric.log file; by default this file is created under the ${user.dir}/logs/ directory. Users may also extend the Falcon monitoring framework to send events to systems like Mondemand/lwes by implementing the org.apache.falcon.plugin.MonitoringPlugin interface.
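As an illustration, a minimal plugin might look like the sketch below. It assumes the MonitoringPlugin interface exposes a single monitor(ResourceMessage) callback, which should be verified against the interface in your Falcon version; the class name is hypothetical.

import org.apache.falcon.aspect.ResourceMessage;
import org.apache.falcon.plugin.MonitoringPlugin;

// Hypothetical plugin that forwards every captured Falcon event to stdout.
// Assumes MonitoringPlugin declares a single monitor(ResourceMessage) method;
// verify the signature against the Falcon version in use.
public class ConsoleMonitoringPlugin implements MonitoringPlugin {

    @Override
    public void monitor(ResourceMessage message) {
        // ResourceMessage carries the same data that is written to metric.log
        // (action, dimensions, status, time taken). A real plugin would push
        // this to an external system such as Mondemand/lwes instead of printing.
        System.out.println("Falcon event: " + message);
    }
}

The plugin class then has to be placed on Falcon's classpath and registered with Falcon (typically via a monitoring plugins property in startup.properties; check your distribution for the exact property name).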
Metrics are captured for the various events Falcon handles. The metric logged for an event has the following properties: the action performed, its dimensions (key/value context such as the entity type), its status (SUCCEEDED or FAILED) and the time taken in nanoseconds.
An example of the metric logged for the submission of a new process definition:
2012-05-04 12:23:34,026 {Action:submit, Dimensions:{entityType=process}, Status: SUCCEEDED, Time-taken:97087000 ns}
Users may parse metric.log, or capture these events from custom monitoring frameworks, and plot graphs or send alerts according to their requirements.
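As a minimal, illustrative example of such parsing, the Java sketch below extracts the action, dimensions, status and time taken from a metric.log line in the format shown above; the class name and regular expression are ad hoc and not part of Falcon.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative parser for metric.log lines of the form:
// 2012-05-04 12:23:34,026 {Action:submit, Dimensions:{entityType=process}, Status: SUCCEEDED, Time-taken:97087000 ns}
public class MetricLogLineParser {

    // Pattern mirrors the sample line above; adjust it if your log layout differs.
    private static final Pattern METRIC_PATTERN = Pattern.compile(
            "\\{Action:(.+?), Dimensions:\\{(.*?)\\}, Status: (\\w+), Time-taken:(\\d+) ns\\}");

    public static void main(String[] args) {
        String line = "2012-05-04 12:23:34,026 {Action:submit, Dimensions:{entityType=process}, "
                + "Status: SUCCEEDED, Time-taken:97087000 ns}";
        Matcher matcher = METRIC_PATTERN.matcher(line);
        if (matcher.find()) {
            System.out.println("action     : " + matcher.group(1));
            System.out.println("dimensions : " + matcher.group(2));
            System.out.println("status     : " + matcher.group(3));
            System.out.println("time-taken : " + matcher.group(4) + " ns");
        }
    }
}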
++ Notifications
System notifications are internally generated and used by Falcon to monitor the Falcon-orchestrated workflow jobs. By default, Falcon starts an embedded ActiveMQ JMS server on the Falcon machine on port 61616, running as a daemon. Alternatively, users can configure Falcon to use an existing JMS server instead of the embedded instance with the following two steps:
1. In startup.properties, set the broker url of the existing JMS server:
   *.broker.url=tcp://jms-server-host:61616
2. Start Falcon with the embedded JMS server disabled:
   <FALCON-INSTALL-DIR>/bin/falcon-start -Dfalcon.embeddedmq=false
Falcon uses FALCON.ENTITY.TOPIC to publish system notifications. This topic and the Map Message fields are internal and could change between releases.
In addition to FALCON.ENTITY.TOPIC, Falcon also creates a JMS topic for every process/feed that is scheduled in Falcon, as part of User notifications. To enable User notifications, the broker url and the implementation class of the JMS engine need to be specified in the cluster definition associated with the feed/process. Users may register consumers on the required topic to check the availability or status of feed instances. The User notification JMS broker can be the same instance as the System notification broker or a different one.
The name of the JMS topic is the same as the process/feed name. Falcon sends a map message to the JMS topic for every feed instance that is created/deleted/replicated/imported/exported. The JMS Map Message carries fields such as cluster, entityType, entityName, nominalTime, operation, feedNames, feedInstancePaths, workflowId, workflowUser, runId, status, timeStamp, logDir, brokerUrl, brokerImplClass, logFile, topicName and brokerTTL, as read out in the consumer example below.
The JMS messages are automatically purged after a certain period (3 days by default) by Falcon's JMS house-keeping service. The TTL (time-to-live) for JMS messages can be configured in Falcon's startup.properties file.
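For example, the TTL could be lowered to one day with an entry along the following lines in startup.properties (the property name shown is the one found in typical Falcon distributions; verify it in your own file):

# JMS message time-to-live in minutes (1440 minutes = 1 day)
*.broker.ttlInMins=1440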
The following example shows how to enable and read user notifications by connecting to the JMS broker.
First, specify the JMS broker url in the cluster definition XML as shown below.
<?xml version="1.0"?>
<!-- filename : primaryCluster.xml -->
<cluster colo="USWestOregon" description="oregonHadoopCluster"
         name="primaryCluster" xmlns="uri:falcon:cluster:0.1">
    <interfaces>
        ...
        ...
        <interface type="messaging" endpoint="tcp://user-jms-broker-host:61616?daemon=true" version="5.1.6" />
        ...
    </interfaces>
</cluster>
Next, use a JMS consumer (Java example below) to read messages from the topic named FALCON.<feed-or-process-name>:
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.command.ActiveMQMapMessage;

import javax.jms.ConnectionFactory;
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Topic;
import javax.jms.Session;
import javax.jms.TopicSession;

public class FalconUserJMSClient {
    public static void main(String[] args) throws Exception {
        // Note: specify the JMS broker URL
        String brokerUrl = "tcp://localhost:61616";
        ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl);
        Connection connection = connectionFactory.createConnection();
        connection.setClientID("Falcon User JMS Consumer");
        TopicSession session = (TopicSession) connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        try {
            // Note: the topic name for the feed will be FALCON.<feed-name>
            Topic falconTopic = session.createTopic("FALCON.feed-sample");
            MessageConsumer consumer = session.createConsumer(falconTopic);
            connection.start();
            while (true) {
                ActiveMQMapMessage msg = (ActiveMQMapMessage) consumer.receive();
                System.out.println("cluster : " + msg.getString("cluster"));
                System.out.println("entityType : " + msg.getString("entityType"));
                System.out.println("entityName : " + msg.getString("entityName"));
                System.out.println("nominalTime : " + msg.getString("nominalTime"));
                System.out.println("operation : " + msg.getString("operation"));
                System.out.println("feedNames : " + msg.getString("feedNames"));
                System.out.println("feedInstancePaths : " + msg.getString("feedInstancePaths"));
                System.out.println("workflowId : " + msg.getString("workflowId"));
                System.out.println("workflowUser : " + msg.getString("workflowUser"));
                System.out.println("runId : " + msg.getString("runId"));
                System.out.println("status : " + msg.getString("status"));
                System.out.println("timeStamp : " + msg.getString("timeStamp"));
                System.out.println("logDir : " + msg.getString("logDir"));
                System.out.println("brokerUrl : " + msg.getString("brokerUrl"));
                System.out.println("brokerImplClass : " + msg.getString("brokerImplClass"));
                System.out.println("logFile : " + msg.getString("logFile"));
                System.out.println("topicName : " + msg.getString("topicName"));
                System.out.println("brokerTTL : " + msg.getString("brokerTTL"));
            }
        } finally {
            if (session != null) {
                session.close();
            }
            if (connection != null) {
                connection.close();
            }
        }
    }
}
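To compile and run the consumer above, the ActiveMQ client library must be on the classpath, since it provides ActiveMQConnectionFactory and ActiveMQMapMessage. If a different JMS client is pointed at the same broker, the ActiveMQ-specific cast can be replaced by the portable javax.jms.MapMessage interface, which offers the same getString accessor.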
++ Alerts
Falcon generates two types of alerts:
1. By default, unrecoverable errors are logged to a log file. Users can view these alerts in the alerts.log file; by default this file is created under the ${user.dir}/logs/ directory.
Users may also extend the Falcon Alerting plugin to send events to systems like Nagios by implementing the org.apache.falcon.plugin.AlertingPlugin interface.
2. Alerts on SLA misses for feeds and processes are detailed in Entity SLA Alerting.
++ Audits
Falcon audits all user activity and captures it in a log file by default. Users can view these audits in the audit.log file; by default this file is created under the ${user.dir}/logs/ directory.
Users may also extend the Falcon Audit plugin to send audits to systems like Apache Argus by implementing the org.apache.falcon.plugin.AuditingPlugin interface.
++ Metrics Collection
Falcon can send process metrics such as waiting time, execution time and the number of failures to Graphite and to the Falcon database.
For details, see Metric Collection.