When TIBCO StreamBase® Runtime detects a deadlock, a detailed trace is written to the node's deadlock.log file. The deadlock trace shows information about the transaction that deadlocked, the resource that deadlocked, transaction stacks, thread stack traces, and the other transactions involved in the deadlock.
A lock order deadlock can occur when two or more transactions lock the same two or more objects in different orders. An illustration of this can be found in the Deadlock Detection section of the Architect's Guide.
The program below will generate a single transaction lock ordering deadlock between two threads, running in a single JVM, in a single node.
package com.tibco.ep.dtm.snippets.tuning;

import com.kabira.platform.Transaction;
import com.kabira.platform.annotation.Managed;

/**
 * Deadlock Example from the Tuning Guide.
 */
public class Deadlock
{
    private static MyManagedObject object1;
    private static MyManagedObject object2;

    /**
     * Main entry point
     * @param args Not used
     * @throws InterruptedException Execution interrupted
     */
    public static void main(String[] args) throws InterruptedException
    {
        //
        // Create a pair of Managed objects.
        //
        new Transaction("Create Objects")
        {
            @Override
            public void run()
            {
                object1 = new MyManagedObject();
                object2 = new MyManagedObject();
            }
        }.execute();

        //
        // Create a pair of transaction classes to lock them.
        // Giving the object parameters in reverse order will
        // cause two different locking orders, resulting in a deadlock.
        //
        Deadlocker deadlocker1 = new Deadlocker(object1, object2);
        Deadlocker deadlocker2 = new Deadlocker(object2, object1);

        //
        // Run them in separate threads until a deadlock is seen.
        //
        while ((deadlocker1.getNumberDeadlocks() == 0)
            && (deadlocker2.getNumberDeadlocks() == 0))
        {
            MyThread thread1 = new MyThread(deadlocker1);
            MyThread thread2 = new MyThread(deadlocker2);

            thread1.start();
            thread2.start();

            thread1.join();
            thread2.join();
        }
    }

    @Managed
    private static class MyManagedObject
    {
        int value;
    }

    private static class MyThread extends Thread
    {
        private final Deadlocker m_deadlocker;

        MyThread(Deadlocker deadlocker)
        {
            m_deadlocker = deadlocker;
        }

        @Override
        public void run()
        {
            m_deadlocker.execute();
        }
    }

    private static class Deadlocker extends Transaction
    {
        private final MyManagedObject m_object1;
        private final MyManagedObject m_object2;

        Deadlocker(MyManagedObject object1, MyManagedObject object2)
        {
            m_object1 = object1;
            m_object2 = object2;
        }

        @Override
        public void run()
        {
            //
            // This will take a transaction read lock on the first object.
            //
            @SuppressWarnings("unused")
            int value = m_object1.value;

            //
            // Wait a while to maximize the possibility of contention.
            //
            blockForAMoment();

            //
            // This will take a transaction write lock on the second object.
            //
            m_object2.value = 42;

            //
            // Wait a while to maximize the possibility of contention.
            //
            blockForAMoment();
        }

        private void blockForAMoment()
        {
            try
            {
                Thread.sleep(500);
            }
            catch (InterruptedException ex)
            {
            }
        }
    }
}
The program generates a deadlock trace in the deadlock.log file, similar to the annotated trace below.
A deadlock trace begins with a separator.
============================================================
Followed by a timestamp and a short description of the deadlock.
2016-06-17 11:02:22.746084 Deadlock detected in transaction 109:1 by engine application::com_intellij_rt_execution_application_AppMain1 running on node A.snippets.
Next there is more detailed information about the deadlock transaction.
TransactionID = 109:1
Node = A.snippets
Name = com.tibco.ep.dtm.snippets.tuning.Deadlock$Deadlocker
Begin Time = 2016-06-17 11:02:22.245182
State = deadlocked
Followed by a description of the object and lock type for the deadlock. This example shows that the deadlock occurred in transaction 109:1 attempting to take a write lock on object ...MyManagedObject:43.
Lock Type = write lock
Target Object = com.tibco.ep.dtm.snippets.tuning.Deadlock$MyManagedObject:43 (3184101770:178056336:270224610788623:43)
Followed by a list of the transaction locks held on the target object at the time of the deadlock. This example shows that transaction 108:1 has a read lock on the target object.
Locks on Target Object:
    read lock held by transaction 108:1
Number of Target Object Write Lock Waiters = 0
Next is a list of locks held by the deadlock transaction. Note that this example shows the deadlock transaction holding a read lock on ...MyManagedObject:39.
Locks held by transaction 109:1:
    com.tibco.ep.dtm.snippets.tuning.Deadlock$MyManagedObject:39 (3184101770:178056336:270224610788623:39) read lock
The next section shows a transaction "callstack" for the deadlock transaction. A transaction callstack contains transaction life cycle entries and entries showing the transaction's thread/process usage. A transaction callstack is read from bottom to top and always starts with a begin transaction entry. This example shows a transaction that deadlocked while using a single thread (thread ID 28488, engine 107).
Transaction callstack for 109:1:
TranId   Engine   ThreadId   Method
109:1    107      28488      deadlock on com.tibco.ep.dtm.snippets.tuning.Deadlock$MyManagedObject:43
109:1    107      28488      begin transaction
Next are thread stack traces for each of the threads being used by the transaction at the time of the deadlock.
Thread stack traces are read from bottom to top.
Thread stacks for transaction 109:1:
TranId   Engine   ThreadId   Stack type   Method
109:1    107      28488      Java         com.kabira.platform.NativeRuntime.setInteger(Native Method)
109:1    107      28488      Java         com.tibco.ep.dtm.snippets.tuning.Deadlock$Deadlocker.run(Deadlock.java:115)
109:1    107      28488      Java         com.kabira.platform.Transaction.execute(Transaction.java:478)
109:1    107      28488      Java         com.kabira.platform.Transaction.execute(Transaction.java:560)
109:1    107      28488      Java         com.tibco.ep.dtm.snippets.tuning.Deadlock$MyThread.run(Deadlock.java:81)
The next section shows a list of the engines installed on the node and their IDs. The IDs map to the Engine column in the transaction and thread sections.
Engines installed on node A.snippets:
ID    Name
100   System::swcoordadmin
101   System::kssl
102   System::administration
103   Dtm::distribution
107   application::com_intellij_rt_execution_application_AppMain1
The next sections show the same transaction information (when available) for each of the other transactions involved in the deadlock.
Other involved transactions:

TransactionID = 108:1
Node = A.snippets
Name = com.tibco.ep.dtm.snippets.tuning.Deadlock$Deadlocker
Begin Time = 2016-06-17 11:02:22.245172
This section shows that transaction 108:1 is blocked waiting for a write lock on object ...MyManagedObject:39, which is currently read-locked by 109:1, the deadlocked transaction.
State = blocked
Lock Type = write lock
Target Object = com.tibco.ep.dtm.snippets.tuning.Deadlock$MyManagedObject:39 (3184101770:178056336:270224610788623:39)
Locks on Target Object:
    read lock held by transaction 109:1
Number of Target Object Write Lock Waiters = 1
Transaction callstack for 108:1:
TranId   Engine   ThreadId   Method
108:1    107      28489      begin transaction
Thread stacks for transaction 108:1:
TranId   Engine   ThreadId   Stack type   Method
108:1    107      28489      Java         com.kabira.platform.NativeRuntime.setInteger(Native Method)
108:1    107      28489      Java         com.tibco.ep.dtm.snippets.tuning.Deadlock$Deadlocker.run(Deadlock.java:115)
108:1    107      28489      Java         com.kabira.platform.Transaction.execute(Transaction.java:478)
108:1    107      28489      Java         com.kabira.platform.Transaction.execute(Transaction.java:560)
108:1    107      28489      Java         com.tibco.ep.dtm.snippets.tuning.Deadlock$MyThread.run(Deadlock.java:81)
Locks held by transaction 108:1:
    com.tibco.ep.dtm.snippets.tuning.Deadlock$MyManagedObject:43 (3184101770:178056336:270224610788623:43) read lock
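A lock order deadlock of this kind is eliminated by having all transactions acquire their locks in the same order. The following is a minimal sketch of such a change to the example's main() method; it is not part of the original snippet, and with this change the while loop in main() would never exit, since it runs until a deadlock is seen.

        //
        // Both Deadlocker instances receive the objects in the same order,
        // so both transactions take a read lock on object1 before requesting
        // a write lock on object2. Neither transaction can hold a lock that
        // the other needs before requesting its second lock, so the
        // lock order deadlock cannot occur.
        //
        Deadlocker deadlocker1 = new Deadlocker(object1, object2);
        Deadlocker deadlocker2 = new Deadlocker(object1, object2);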
Lock promotion occurs when a transaction that currently holds a read lock on an object attempts to acquire a write lock on the same object (that is, promoting the read lock to a write lock). If blocking for this write lock would result in a deadlock, it is called a promotion deadlock.
The program below will generate a single promotion deadlock between two threads, running in a single JVM, in a single node.
package com.tibco.ep.dtm.snippets.tuning;

import com.kabira.platform.Transaction;
import com.kabira.platform.annotation.Managed;

/**
 * Promotion deadlock example from the Tuning Guide.
 */
public class PromotionDeadlock
{
    private static MyManagedObject targetObject;

    /**
     * Main entry point
     * @param args Not used
     * @throws InterruptedException Execution interrupted
     */
    public static void main(String[] args) throws InterruptedException
    {
        //
        // Create a Managed object.
        //
        new Transaction("Create Objects")
        {
            @Override
            public void run()
            {
                targetObject = new MyManagedObject();
            }
        }.execute();

        //
        // Create a pair of transaction classes that will both
        // promote lock the Managed object, resulting in a
        // promotion deadlock.
        //
        Deadlocker deadlocker1 = new Deadlocker(targetObject);
        Deadlocker deadlocker2 = new Deadlocker(targetObject);

        //
        // Run them in separate threads until a deadlock is seen.
        //
        while ((deadlocker1.getNumberDeadlocks() == 0)
            && (deadlocker2.getNumberDeadlocks() == 0))
        {
            MyThread thread1 = new MyThread(deadlocker1);
            MyThread thread2 = new MyThread(deadlocker2);

            thread1.start();
            thread2.start();

            thread1.join();
            thread2.join();
        }
    }

    @Managed
    private static class MyManagedObject
    {
        int value;
    }

    private static class MyThread extends Thread
    {
        private final Deadlocker m_deadlocker;

        MyThread(Deadlocker deadlocker)
        {
            m_deadlocker = deadlocker;
        }

        @Override
        public void run()
        {
            m_deadlocker.execute();
        }
    }

    private static class Deadlocker extends Transaction
    {
        private final MyManagedObject m_targetObject;

        Deadlocker(MyManagedObject targetObject)
        {
            m_targetObject = targetObject;
        }

        @Override
        public void run()
        {
            //
            // This will take a transaction read lock on the object.
            //
            @SuppressWarnings("unused")
            int value = m_targetObject.value;

            //
            // Wait a while to maximize the possibility of contention.
            //
            blockForAMoment();

            //
            // This will take a transaction write lock on the object
            // (promoting the read lock).
            //
            m_targetObject.value = 42;

            //
            // Wait a while to maximize the possibility of contention.
            //
            blockForAMoment();
        }

        private void blockForAMoment()
        {
            try
            {
                Thread.sleep(500);
            }
            catch (InterruptedException ex)
            {
            }
        }
    }
}
The trace messages are similar to those shown in the previous section for a lock order deadlock, with the difference being that a promotion deadlock is mentioned:
============================================================
2016-06-17 10:52:46.948868 Deadlock detected in transaction 86:1 by engine application::com_intellij_rt_execution_application_AppMain0 running on node A.snippets.
TransactionID = 86:1
Node = A.snippets
Name = com.tibco.ep.dtm.snippets.tuning.PromotionDeadlock$Deadlocker
Begin Time = 2016-06-17 10:52:46.448477
State = deadlocked
Lock Type = promote lock
Target Object = com.tibco.ep.dtm.snippets.tuning.PromotionDeadlock$MyManagedObject:11 (3184101770:8762792:270224610788623:11)
Locks on Target Object:
    read lock (and promote waiter) held by transaction 85:1
    read lock held by transaction 86:1
Number of Target Object Write Lock Waiters = 0
Locks held by transaction 86:1:
    com.tibco.ep.dtm.snippets.tuning.PromotionDeadlock$MyManagedObject:11 (3184101770:8762792:270224610788623:11) read lock
Transaction callstack for 86:1:
TranId   Engine   ThreadId   Method
86:1     105      27318      promotion deadlock on com.tibco.ep.dtm.snippets.tuning.PromotionDeadlock$MyManagedObject:11
86:1     105      27318      begin transaction
Thread stacks for transaction 86:1:
TranId   Engine   ThreadId   Stack type   Method
86:1     105      27318      Java         com.kabira.platform.NativeRuntime.setInteger(Native Method)
86:1     105      27318      Java         com.tibco.ep.dtm.snippets.tuning.PromotionDeadlock$Deadlocker.run(PromotionDeadlock.java:116)
86:1     105      27318      Java         com.kabira.platform.Transaction.execute(Transaction.java:478)
86:1     105      27318      Java         com.kabira.platform.Transaction.execute(Transaction.java:560)
86:1     105      27318      Java         com.tibco.ep.dtm.snippets.tuning.PromotionDeadlock$MyThread.run(PromotionDeadlock.java:83)
Engines installed on node A.snippets:
ID    Name
100   System::swcoordadmin
101   System::kssl
102   System::administration
103   Dtm::distribution
105   application::com_intellij_rt_execution_application_AppMain0
Other involved transactions:

TransactionID = 85:1
Node = A.snippets
Name = com.tibco.ep.dtm.snippets.tuning.PromotionDeadlock$Deadlocker
Begin Time = 2016-06-17 10:52:46.448434
State = blocked
Lock Type = promote lock
Target Object = com.tibco.ep.dtm.snippets.tuning.PromotionDeadlock$MyManagedObject:11 (3184101770:8762792:270224610788623:11)
Locks on Target Object:
    read lock (and promote waiter) held by transaction 85:1
    read lock held by transaction 86:1
Number of Target Object Write Lock Waiters = 0
Transaction callstack for 85:1:
TranId   Engine   ThreadId   Method
85:1     105      27317      begin transaction
Thread stacks for transaction 85:1:
TranId   Engine   ThreadId   Stack type   Method
85:1     105      27317      Java         com.kabira.platform.NativeRuntime.setInteger(Native Method)
85:1     105      27317      Java         com.tibco.ep.dtm.snippets.tuning.PromotionDeadlock$Deadlocker.run(PromotionDeadlock.java:116)
85:1     105      27317      Java         com.kabira.platform.Transaction.execute(Transaction.java:478)
85:1     105      27317      Java         com.kabira.platform.Transaction.execute(Transaction.java:560)
85:1     105      27317      Java         com.tibco.ep.dtm.snippets.tuning.PromotionDeadlock$MyThread.run(PromotionDeadlock.java:83)
Locks held by transaction 85:1:
    com.tibco.ep.dtm.snippets.tuning.PromotionDeadlock$MyManagedObject:11 (3184101770:8762792:270224610788623:11) read lock
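A promotion deadlock can often be avoided by acquiring the write lock up front rather than promoting a read lock. The following is a minimal sketch of such a change to the Deadlocker.run() method in this example (not part of the original snippet); it relies only on the locking behavior already shown, where writing a @Managed field takes a write lock and reading one takes a read lock.

        @Override
        public void run()
        {
            //
            // Perform the update first so that a write lock is taken
            // immediately. Both transactions now request the write lock
            // directly, so neither holds a read lock that must be promoted.
            //
            m_targetObject.value = 42;

            //
            // The read executes under the write lock that is already held.
            //
            @SuppressWarnings("unused")
            int value = m_targetObject.value;
        }

The trade-off is that the write lock is held for the entire transaction, which can increase contention on the object.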
The previous examples showed simple deadlocks occurring between two transactions. More complex deadlocks are possible, involving more than two transactions. For example, transaction 1 deadlocks trying to acquire a lock on an object held by transaction 2, which is blocked waiting on an object held by transaction 3. A sketch of such a cycle is shown after the list below.
To aid in analyzing complex deadlocks, the trace messages include the following:
For each contended object, the locks currently held on it are displayed, including any promotion waiters.
If the runtime detects that the deadlock happened because a read lock was blocked behind a pending promotion, the trace also identifies the transaction that is blocked waiting for the promotion.
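For illustration, a three-transaction cycle can be produced with a hypothetical variant of the first example that uses three objects and gives each Deadlocker a different, overlapping pair (object3 is an assumed additional MyManagedObject, not part of the original snippet):

        //
        // Each transaction read-locks its first object and then tries to
        // write-lock its second, so the lock requests form a ring:
        // 1 -> 2, 2 -> 3, 3 -> 1. A deadlock involving all three
        // transactions can then occur, and all three appear in the
        // deadlock trace.
        //
        Deadlocker deadlocker1 = new Deadlocker(object1, object2);
        Deadlocker deadlocker2 = new Deadlocker(object2, object3);
        Deadlocker deadlocker3 = new Deadlocker(object3, object1);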
Single-node deadlocks are bad for performance because they are a source of contention, leading to lower throughput, higher latency, and higher CPU cost. However, they are detected immediately, because each node has a built-in transaction lock manager.
Distributed deadlocks are extremely bad for performance because they are detected with a timeout mechanism rather than immediately. The default setting for this timeout is 60 seconds in a production build.
The program below will generate a distributed transaction lock ordering deadlock between two transactions running across multiple nodes.
package com.tibco.ep.dtm.snippets.tuning;

import com.kabira.platform.Transaction;
import com.kabira.platform.annotation.Managed;
import com.kabira.platform.highavailability.PartitionManager;
import com.kabira.platform.highavailability.PartitionManager.EnableAction;
import com.kabira.platform.highavailability.PartitionMapper;
import com.kabira.platform.highavailability.ReplicaNode;
import static com.kabira.platform.highavailability.ReplicaNode.ReplicationType.*;
import com.kabira.platform.property.Status;

/**
 * Distributed deadlock example from the Tuning Guide
 * <h2> Target Nodes</h2>
 * <ul>
 * <li> <b>servicename</b>=snippets
 * </ul>
 * Note this sample blocks on the B.snippets and C.snippets nodes,
 * and needs to be explicitly stopped.
 */
public class DistributedDeadlock
{
    private static TestObject object1;
    private static TestObject object2;
    private static final String nodeName = System.getProperty(Status.NODE_NAME);
    private static final String NODE_A = "A.snippets";
    private static final String NODE_B = "B.snippets";
    private static final String NODE_C = "C.snippets";

    /**
     * Main entry point
     * @param args Not used
     * @throws InterruptedException Execution interrupted
     */
    public static void main(String[] args) throws InterruptedException
    {
        //
        // Install a partition mapper on each node
        //
        AssignPartitions.installPartitionMapper();

        //
        // Block all but the A node.
        //
        new NodeChecker().blockAllButA();

        //
        // Define the partitions to be used by this snippet
        //
        new PartitionCreator().createPartitions();

        //
        // Create a pair of objects, one active on node B,
        // and the other active on node C.
        //
        new Transaction("Create Objects")
        {
            @Override
            public void run()
            {
                object1 = new TestObject();
                object2 = new TestObject();

                //
                // For each distributed object, assign it a
                // reference to the other.
                //
                object1.otherObject = object2;
                object2.otherObject = object1;
            }
        }.execute();

        //
        // Spawn a deadlocker thread for each object.
        //
        new Transaction("Spawn Deadlockers")
        {
            @Override
            public void run()
            {
                //
                // Ask them each to spawn a Deadlocker thread.
                // This should execute on node B for one of them
                // and node C for the other.
                //
                object1.spawnDeadlocker();
                object2.spawnDeadlocker();
            }
        }.execute();

        //
        // Now block main in the A node to keep the JVM from exiting.
        //
        new NodeChecker().block();
    }

    private static class PartitionCreator
    {
        void createPartitions()
        {
            new Transaction("Partition Definition")
            {
                @Override
                protected void run() throws Rollback
                {
                    //
                    // Set up the replica node lists. The Even partition is
                    // active on node B with replicas on C and A, while the
                    // Odd partition is active on node C with replicas on B and A.
                    //
                    ReplicaNode [] evenReplicaList = new ReplicaNode []
                    {
                        new ReplicaNode(NODE_C, SYNCHRONOUS),
                        new ReplicaNode(NODE_A, SYNCHRONOUS)
                    };
                    ReplicaNode [] oddReplicaList = new ReplicaNode []
                    {
                        new ReplicaNode(NODE_B, SYNCHRONOUS),
                        new ReplicaNode(NODE_A, SYNCHRONOUS)
                    };

                    //
                    // Define two partitions
                    //
                    PartitionManager.definePartition("Even", null, NODE_B, evenReplicaList);
                    PartitionManager.definePartition("Odd", null, NODE_C, oddReplicaList);

                    //
                    // Enable the partitions
                    //
                    PartitionManager.enablePartitions(
                        EnableAction.JOIN_CLUSTER_PURGE);
                }
            }.execute();
        }
    }

    //
    // Partition mapper that maps objects to either Even or Odd
    //
    private static class AssignPartitions extends PartitionMapper
    {
        private Integer m_count = 0;

        @Override
        public String getPartition(Object obj)
        {
            this.m_count++;
            String partition = "Even";

            if ((this.m_count % 2) == 1)
            {
                partition = "Odd";
            }

            return partition;
        }

        static void installPartitionMapper()
        {
            new Transaction("installPartitionMapper")
            {
                @Override
                protected void run()
                {
                    //
                    // Install the partition mapper
                    //
                    PartitionManager.setMapper(
                        TestObject.class, new AssignPartitions());
                }
            }.execute();
        }
    }

    @Managed
    private static class TestObject
    {
        TestObject otherObject;

        @SuppressWarnings("unused")
        private String m_data;

        public void lockObjects()
        {
            Transaction.setTransactionDescription("locking first object");
            doWork();

            //
            // Delay longer on the B node to try to force the deadlock
            // to occur on the C node. Otherwise, both sides could see
            // deadlocks at the same time, making the log files less clear
            // for this snippet.
            //
            if (nodeName.equals(NODE_B))
            {
                block(10000);
            }
            else
            {
                block(500);
            }

            Transaction.setTransactionDescription("locking second object");
            otherObject.doWork();

            block(500);
        }

        public void spawnDeadlocker()
        {
            new DeadlockThread(this).start();
        }

        private void block(int milliseconds)
        {
            try
            {
                Thread.sleep(milliseconds);
            }
            catch (InterruptedException ex)
            {
            }
        }

        private void doWork()
        {
            m_data = "work";
        }
    }

    private static class DeadlockThread extends Thread
    {
        private final Transaction m_deadlockTransaction;

        DeadlockThread(TestObject object)
        {
            m_deadlockTransaction = new DeadlockTransaction("DeadlockThread", object);
        }

        @Override
        public void run()
        {
            while (true)
            {
                if (m_deadlockTransaction.execute() == Transaction.Result.ROLLBACK)
                {
                    return;
                }
            }
        }
    }

    private static class DeadlockTransaction extends Transaction
    {
        private final TestObject m_object;

        DeadlockTransaction(final String name, TestObject object)
        {
            super(name);
            m_object = object;
        }

        @Override
        public void run() throws Rollback
        {
            if (getNumberDeadlocks() != 0)
            {
                System.out.println("A deadlock has been seen, "
                    + "you may now stop the distributed application");
                throw new Transaction.Rollback();
            }

            m_object.lockObjects();
        }
    }

    private static class NodeChecker
    {
        //
        // If we are not the A node, block here forever
        //
        void blockAllButA()
        {
            while (!nodeName.equals(NODE_A))
            {
                block();
            }
        }

        public void block()
        {
            while (true)
            {
                try
                {
                    Thread.sleep(500);
                }
                catch (InterruptedException ex)
                {
                }
            }
        }
    }
}
The program should produce a deadlock that is processed on node C and reported in node C's deadlock.log file, looking similar to the following:
============================================================
The deadlock trace is generated on the node where the distributed transaction was started. This is not the node where the deadlock timeout occurred.
2016-06-17 11:51:32.618439 Global transaction deadlock processed by engine Dtm::distribution running on node C.snippets in transaction 141:1
TransactionID = 141:1
GlobalTransactionID = serializable:3080819280765915:141:1:272780508690721
Node = C.snippets
Name = DeadlockThread
Description = locking second object
Begin Time = 2016-06-17 11:50:31.830473
State = distributed deadlock
Locks held by transaction 141:1:
    com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:46 (3184101770:3037728096:270224610788623:46) write lock
Transaction callstack for 141:1:
TranId   Engine   ThreadId   Method
141:1    103      30698      distribution calling com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject.$doWorkImpl()V on com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:60
141:1    103      30698      dispatch calling [distributed dispatch] on com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:60
141:1    109      32695      begin transaction
Thread stacks for transaction 141:1:
TranId   Engine   ThreadId   Stack type   Method
141:1    109      32695      Java         com.kabira.platform.NativeRuntime.sendTwoWay(Native Method)
141:1    109      32695      Java         com.kabira.platform.NativeRuntime.sendTwoWay(NativeRuntime.java:111)
141:1    109      32695      Java         com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject.doWork(DistributedDeadlock.java)
141:1    109      32695      Java         com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject.$lockObjectsImpl(DistributedDeadlock.java:207)
141:1    109      32695      Java         com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject.lockObjects(DistributedDeadlock.java)
141:1    109      32695      Java         com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$DeadlockTransaction.run(DistributedDeadlock.java:279)
141:1    109      32695      Java         com.kabira.platform.Transaction.execute(Transaction.java:478)
141:1    109      32695      Java         com.kabira.platform.Transaction.execute(Transaction.java:560)
141:1    109      32695      Java         com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$DeadlockThread.run(DistributedDeadlock.java:250)
141:1    103      30698      Native       SWProcessManager::stackTrace()
141:1    103      30698      Native       OSDispStackTraceNotifier::stackTrace()
141:1    103      30698      Native       OSCallstack::collectCallstack()
141:1    103      30698      Native       OSDeadlockReport::loadThreadStacks()
141:1    103      30698      Native       OSDeadlockReport::distributedDeadlockReport()
141:1    103      30698      Native       CSComm::handleDeadlockError()
141:1    103      30698      Native       CSComm::handleRetryableError()
141:1    103      30698      Native       CSComm::sendTwoWay()
141:1    103      30698      Native       CSMetaDispatcher()
141:1    103      30698      Native       OSDispChannel::callTwoWay()
141:1    103      30698      Native       OSDispChannel::callDispatchFunc()
141:1    103      30698      Native       OSThreadedDispChannel::dispatchUserEvent()
141:1    103      30698      Native       OSThreadedDispChannel::start()
141:1    103      30698      Native       startFunction()
141:1    103      30698      Native       clone
Engines installed on node C.snippets:
ID    Name
100   System::swcoordadmin
101   System::kssl
102   System::administration
103   Dtm::distribution
109   application::com_intellij_rt_execution_application_AppMain2
Next comes information from the remote node, where the deadlock timeout occurred.
Remote deadlock information:
com.kabira.ktvm.transaction.DeadlockError:
2016-06-17 11:51:32.363282 Deadlock detected in transaction 139:4 by engine application::com_intellij_rt_execution_application_AppMain2 running on node B.snippets.
TransactionID = 139:4
GlobalTransactionID = serializable:3080819280765915:141:1:272780508690721
Node = B.snippets
Begin Time = 2016-06-17 11:50:32.336391
State = time out, distributed deadlock
Lock Type = write lock
Target Object = com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:60 (3184101770:3037728096:270224610788623:60)
Locks on Target Object:
    write lock held by transaction 144:1
Number of Target Object Write Lock Waiters = 1
Locks held by transaction 139:4:
    com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:46 (3184101770:3037728096:270224610788623:46) write lock
Transaction callstack for 139:4:
TranId   Engine   ThreadId   Method
139:4    109      32600      distributed deadlock on com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:60
139:4    109      32600      dispatch calling com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject.$doWorkImpl()V on com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:60
139:4    103      32029      begin transaction
Thread stacks for transaction 139:4:
TranId   Engine   ThreadId   Stack type   Method
139:4    103      32029      Native       SWQCB::queueTwoWayEvent()
139:4    103      32029      Native       SWEventChan::sendTwoWayEvent()
139:4    103      32029      Native       OSDispatch::sendTwoWayViaEventBus()
139:4    103      32029      Native       OSDispatch::sendTwoWayRequest()
139:4    103      32029      Native       CSReadChannel::processTwoWayRequest()
139:4    103      32029      Native       CSReadChannel::processRequest()
139:4    103      32029      Native       CSNetReader::execute()
139:4    103      32029      Native       SWEngineThreadHandler::start()
139:4    103      32029      Native       startFunction()
139:4    103      32029      Native       clone
139:4    109      32600      Java         com.kabira.platform.NativeRuntime.setReference(Native Method)
139:4    109      32600      Java         com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject.$doWorkImpl(DistributedDeadlock.java:230)
Engines installed on node B.snippets:
ID    Name
100   System::swcoordadmin
101   System::kssl
102   System::administration
103   Dtm::distribution
109   application::com_intellij_rt_execution_application_AppMain2
Other involved transactions:

TransactionID = 144:1
GlobalTransactionID = serializable:3124420528571642:144:1:272698692647770
Node = B.snippets
Name = DeadlockThread
Description = locking second object
Begin Time = 2016-06-17 11:50:31.839979
State = state not available, transaction may be running
Transaction callstack for 144:1:
TranId   Engine   ThreadId   Method
144:1    103      30462      distribution calling com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject.$doWorkImpl()V on com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:46
144:1    103      30462      dispatch calling [distributed dispatch] on com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:46
144:1    109      32696      begin transaction
Locks held by transaction 144:1:
    com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:60 (3184101770:3037728096:270224610788623:60) write lock
        at com.kabira.platform.NativeRuntime.setReference(Native Method)
        at com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject.$doWorkImpl(DistributedDeadlock.java:230)
Also included from the remote node is a list of all transactions on that node that were blocked at the time of the deadlock.
All local blocked transactions on node B.snippets:
Transaction [serializable:3124420528571642:144:1:272698692647770, tid 30718], started at 2016-06-17 11:50:41.841842, is blocked waiting for a write lock on com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:46 (3184101770:3037728096:270224610788623:46)
    locks write { 'DeadlockThread'[serializable:3080819280765915:141:1:272780508690721, tid 32695, locking second object] } {1 write waiters }
Transaction callstack for transaction 142:1:
    Engine 103 Thread 30718 begin transaction
    Engine 109 Thread 32642 dispatch calling com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject.$doWorkImpl()V on com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:46
Objects currently locked in transaction [serializable:3124420528571642:144:1:272698692647770, tid 30718]
    com.tibco.ep.dtm.snippets.tuning.DistributedDeadlock$TestObject:60 (3184101770:3037728096:270224610788623:60) write lock